00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4077
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3667
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.057 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.058 The recommended git tool is: git
00:00:00.058 using credential 00000000-0000-0000-0000-000000000002
00:00:00.062 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.084 Fetching changes from the remote Git repository
00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.130 Using shallow fetch with depth 1
00:00:00.130 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.130 > git --version # timeout=10
00:00:00.188 > git --version # 'git version 2.39.2'
00:00:00.188 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.249 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.249 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.774 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.785 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.796 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.796 > git config core.sparsecheckout # timeout=10
00:00:04.808 > git read-tree -mu HEAD # timeout=10
00:00:04.822 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.846 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.847 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.950 [Pipeline] Start of Pipeline
00:00:04.965 [Pipeline] library
00:00:04.968 Loading library shm_lib@master
00:00:04.968 Library shm_lib@master is cached. Copying from home.
00:00:04.986 [Pipeline] node
00:00:05.007 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.008 [Pipeline] {
00:00:05.021 [Pipeline] catchError
00:00:05.022 [Pipeline] {
00:00:05.037 [Pipeline] wrap
00:00:05.044 [Pipeline] {
00:00:05.050 [Pipeline] stage
00:00:05.051 [Pipeline] { (Prologue)
00:00:05.066 [Pipeline] echo
00:00:05.067 Node: VM-host-SM38
00:00:05.074 [Pipeline] cleanWs
00:00:05.085 [WS-CLEANUP] Deleting project workspace...
00:00:05.085 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.093 [WS-CLEANUP] done
00:00:05.284 [Pipeline] setCustomBuildProperty
00:00:05.357 [Pipeline] httpRequest
00:00:05.749 [Pipeline] echo
00:00:05.751 Sorcerer 10.211.164.101 is alive
00:00:05.760 [Pipeline] retry
00:00:05.763 [Pipeline] {
00:00:05.776 [Pipeline] httpRequest
00:00:05.782 HttpMethod: GET
00:00:05.782 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.783 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.785 Response Code: HTTP/1.1 200 OK
00:00:05.786 Success: Status code 200 is in the accepted range: 200,404
00:00:05.787 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.316 [Pipeline] }
00:00:06.332 [Pipeline] // retry
00:00:06.339 [Pipeline] sh
00:00:06.622 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.639 [Pipeline] httpRequest
00:00:09.580 [Pipeline] echo
00:00:09.582 Sorcerer 10.211.164.101 is alive
00:00:09.591 [Pipeline] retry
00:00:09.594 [Pipeline] {
00:00:09.608 [Pipeline] httpRequest
00:00:09.614 HttpMethod: GET
00:00:09.615 URL: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:00:09.615 Sending request to url: http://10.211.164.101/packages/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:00:09.622 Response Code: HTTP/1.1 200 OK
00:00:09.623 Success: Status code 200 is in the accepted range: 200,404
00:00:09.623 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:00:32.216 [Pipeline] }
00:00:32.234 [Pipeline] // retry
00:00:32.242 [Pipeline] sh
00:00:32.530 + tar --no-same-owner -xf spdk_2a91567e48d607d62a2d552252c20d3930f5783f.tar.gz
00:00:35.088 [Pipeline] sh
00:00:35.374 + git -C spdk log --oneline -n5
00:00:35.374 2a91567e4 CHANGELOG.md: corrected typo
00:00:35.374 6c35d974e lib/nvme: destruct controllers that failed init asynchronously
00:00:35.374 414f91a0c lib/nvmf: Fix double free of connect request
00:00:35.374 d8f6e798d nvme: Fix discovery loop when target has no entry
00:00:35.374 ff2e6bfe4 lib/lvol: cluster size must be a multiple of bs_dev->blocklen
00:00:35.397 [Pipeline] withCredentials
00:00:35.411 > git --version # timeout=10
00:00:35.425 > git --version # 'git version 2.39.2'
00:00:35.447 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:35.449 [Pipeline] {
00:00:35.459 [Pipeline] retry
00:00:35.461 [Pipeline] {
00:00:35.477 [Pipeline] sh
00:00:35.765 + git ls-remote http://dpdk.org/git/dpdk main
00:00:35.780 [Pipeline] }
00:00:35.798 [Pipeline] // retry
00:00:35.804 [Pipeline] }
00:00:35.821 [Pipeline] // withCredentials
00:00:35.831 [Pipeline] httpRequest
00:00:37.005 [Pipeline] echo
00:00:37.007 Sorcerer 10.211.164.101 is alive
00:00:37.016 [Pipeline] retry
00:00:37.018 [Pipeline] {
00:00:37.032 [Pipeline] httpRequest
00:00:37.037 HttpMethod: GET
00:00:37.038 URL: http://10.211.164.101/packages/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:00:37.039 Sending request to url: http://10.211.164.101/packages/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:00:37.049 Response Code: HTTP/1.1 200 OK
00:00:37.050 Success: Status code 200 is in the accepted range: 200,404
00:00:37.050 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:01:28.571 [Pipeline] }
00:01:28.589 [Pipeline] // retry
00:01:28.597 [Pipeline] sh
00:01:28.886 + tar --no-same-owner -xf dpdk_f4ccce58c1a33cb41e1e820da504698437987efc.tar.gz
00:01:30.290 [Pipeline] sh
00:01:30.620 + git -C dpdk log --oneline -n5
00:01:30.620 f4ccce58c1 doc: allow warnings in Sphinx for DTS
00:01:30.620 0c0cd5ffb0 version: 24.11-rc3
00:01:30.620 8c9a7471a0 dts: add checksum offload test suite
00:01:30.620 bee7cf823c dts: add checksum offload to testpmd shell
00:01:30.620 2eef9a80df dts: add dynamic queue test suite
00:01:30.641 [Pipeline] writeFile
00:01:30.657 [Pipeline] sh
00:01:30.944 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:30.957 [Pipeline] sh
00:01:31.237 + cat autorun-spdk.conf
00:01:31.237 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.237 SPDK_TEST_NVME=1
00:01:31.237 SPDK_TEST_FTL=1
00:01:31.237 SPDK_TEST_ISAL=1
00:01:31.237 SPDK_RUN_ASAN=1
00:01:31.237 SPDK_RUN_UBSAN=1
00:01:31.237 SPDK_TEST_XNVME=1
00:01:31.237 SPDK_TEST_NVME_FDP=1
00:01:31.237 SPDK_TEST_NATIVE_DPDK=main
00:01:31.237 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:31.237 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.245 RUN_NIGHTLY=1
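The conf above is the job's test matrix: each SPDK_TEST_*/SPDK_RUN_* flag gates one part of the run, and RUN_NIGHTLY=1 marks this as the timer-triggered nightly variant. A minimal sketch of how such flags are consumed, using only names that appear in this log (the prepare_nvme.sh trace just below shows the same conditionals):

    # sketch: flag-gated setup in the style of the prepare_nvme.sh trace below
    source autorun-spdk.conf
    declare -A nvme_files=([nvme.img]=5G)   # base image, size as traced below
    if (( SPDK_TEST_FTL == 1 )); then
        nvme_files[nvme-ftl.img]=6G         # FTL gets its own disk (attached later with ms=64)
    fi
    if (( SPDK_TEST_NVME_FDP == 1 )); then
        nvme_files[nvme-fdp.img]=1G         # FDP gets a small dedicated disk
    fi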
00:01:31.247 [Pipeline] }
00:01:31.263 [Pipeline] // stage
00:01:31.278 [Pipeline] stage
00:01:31.281 [Pipeline] { (Run VM)
00:01:31.294 [Pipeline] sh
00:01:31.582 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:31.582 + echo 'Start stage prepare_nvme.sh'
00:01:31.582 Start stage prepare_nvme.sh
00:01:31.582 + [[ -n 3 ]]
00:01:31.582 + disk_prefix=ex3
00:01:31.582 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:31.582 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:31.582 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:31.582 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:31.582 ++ SPDK_TEST_NVME=1
00:01:31.582 ++ SPDK_TEST_FTL=1
00:01:31.582 ++ SPDK_TEST_ISAL=1
00:01:31.582 ++ SPDK_RUN_ASAN=1
00:01:31.582 ++ SPDK_RUN_UBSAN=1
00:01:31.582 ++ SPDK_TEST_XNVME=1
00:01:31.582 ++ SPDK_TEST_NVME_FDP=1
00:01:31.582 ++ SPDK_TEST_NATIVE_DPDK=main
00:01:31.582 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:31.582 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:31.582 ++ RUN_NIGHTLY=1
00:01:31.582 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:31.582 + nvme_files=()
00:01:31.582 + declare -A nvme_files
00:01:31.582 + backend_dir=/var/lib/libvirt/images/backends
00:01:31.582 + nvme_files['nvme.img']=5G
00:01:31.582 + nvme_files['nvme-cmb.img']=5G
00:01:31.582 + nvme_files['nvme-multi0.img']=4G
00:01:31.582 + nvme_files['nvme-multi1.img']=4G
00:01:31.582 + nvme_files['nvme-multi2.img']=4G
00:01:31.582 + nvme_files['nvme-openstack.img']=8G
00:01:31.582 + nvme_files['nvme-zns.img']=5G
00:01:31.582 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:31.582 + (( SPDK_TEST_FTL == 1 ))
00:01:31.582 + nvme_files["nvme-ftl.img"]=6G
00:01:31.582 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:31.582 + nvme_files["nvme-fdp.img"]=1G
00:01:31.582 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:31.582 + for nvme in "${!nvme_files[@]}"
00:01:31.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:01:31.582 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:31.582 + for nvme in "${!nvme_files[@]}"
00:01:31.582 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G
00:01:31.844 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:31.844 + for nvme in "${!nvme_files[@]}"
00:01:31.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:01:31.844 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:31.844 + for nvme in "${!nvme_files[@]}"
00:01:31.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:01:31.844 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:31.844 + for nvme in "${!nvme_files[@]}"
00:01:31.844 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:01:31.844 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:32.105 + for nvme in "${!nvme_files[@]}"
00:01:32.105 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:01:32.105 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:32.105 + for nvme in "${!nvme_files[@]}"
00:01:32.105 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:01:32.367 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:32.367 + for nvme in "${!nvme_files[@]}"
00:01:32.367 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G
00:01:32.629 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:32.629 + for nvme in "${!nvme_files[@]}"
00:01:32.629 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:01:32.629 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:32.629 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:01:32.629 + echo 'End stage prepare_nvme.sh'
00:01:32.629 End stage prepare_nvme.sh
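Each "Formatting '...', fmt=raw size=... preallocation=falloc" line above is qemu-img output. create_nvme_img.sh's internals are not shown in this log, but a command that would produce exactly this output for the 5G base image is (a sketch, not the script's verbatim contents):

    # hypothetical equivalent of one create_nvme_img.sh invocation above
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex3-nvme.img 5G

With falloc, the blocks are reserved via fallocate(2) rather than written out, which is why all nine backing images appear within about a second of log time.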
00:01:32.641 [Pipeline] sh
00:01:32.922 + DISTRO=fedora39
00:01:32.922 + CPUS=10
00:01:32.922 + RAM=12288
00:01:32.922 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:32.922 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:32.922
00:01:32.922 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:32.922 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:32.922 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:32.922 HELP=0
00:01:32.922 DRY_RUN=0
00:01:32.922 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,
00:01:32.922 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:32.922 NVME_AUTO_CREATE=0
00:01:32.922 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,,
00:01:32.922 NVME_CMB=,,,,
00:01:32.922 NVME_PMR=,,,,
00:01:32.922 NVME_ZNS=,,,,
00:01:32.922 NVME_MS=true,,,,
00:01:32.922 NVME_FDP=,,,on,
00:01:32.922 SPDK_VAGRANT_DISTRO=fedora39
00:01:32.922 SPDK_VAGRANT_VMCPU=10
00:01:32.922 SPDK_VAGRANT_VMRAM=12288
00:01:32.922 SPDK_VAGRANT_PROVIDER=libvirt
00:01:32.922 SPDK_VAGRANT_HTTP_PROXY=
00:01:32.922 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:32.922 SPDK_OPENSTACK_NETWORK=0
00:01:32.922 VAGRANT_PACKAGE_BOX=0
00:01:32.922 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:32.922 FORCE_DISTRO=true
00:01:32.922 VAGRANT_BOX_VERSION=
00:01:32.922 EXTRA_VAGRANTFILES=
00:01:32.922 NIC_MODEL=e1000
00:01:32.922
00:01:32.922 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:32.922 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:35.462 Bringing machine 'default' up with 'libvirt' provider...
00:01:35.722 ==> default: Creating image (snapshot of base box volume).
00:01:35.722 ==> default: Creating domain with the following settings...
00:01:35.981 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732582018_833c9d5b5ea6d9cfd1e9
00:01:35.981 ==> default: -- Domain type: kvm
00:01:35.981 ==> default: -- Cpus: 10
00:01:35.981 ==> default: -- Feature: acpi
00:01:35.981 ==> default: -- Feature: apic
00:01:35.981 ==> default: -- Feature: pae
00:01:35.981 ==> default: -- Memory: 12288M
00:01:35.981 ==> default: -- Memory Backing: hugepages:
00:01:35.981 ==> default: -- Management MAC:
00:01:35.981 ==> default: -- Loader:
00:01:35.981 ==> default: -- Nvram:
00:01:35.981 ==> default: -- Base box: spdk/fedora39
00:01:35.981 ==> default: -- Storage pool: default
00:01:35.981 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732582018_833c9d5b5ea6d9cfd1e9.img (20G)
00:01:35.981 ==> default: -- Volume Cache: default
00:01:35.981 ==> default: -- Kernel:
00:01:35.981 ==> default: -- Initrd:
00:01:35.981 ==> default: -- Graphics Type: vnc
00:01:35.981 ==> default: -- Graphics Port: -1
00:01:35.981 ==> default: -- Graphics IP: 127.0.0.1
00:01:35.981 ==> default: -- Graphics Password: Not defined
00:01:35.981 ==> default: -- Video Type: cirrus
00:01:35.981 ==> default: -- Video VRAM: 9216
00:01:35.981 ==> default: -- Sound Type:
00:01:35.981 ==> default: -- Keymap: en-us
00:01:35.981 ==> default: -- TPM Path:
00:01:35.981 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:35.981 ==> default: -- Command line args:
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:35.981 ==> default: -> value=-drive,
00:01:35.981 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:35.981 ==> default: -> value=-drive,
00:01:35.981 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:35.981 ==> default: -> value=-drive,
00:01:35.981 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.981 ==> default: -> value=-drive,
00:01:35.981 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.981 ==> default: -> value=-drive,
00:01:35.981 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:35.981 ==> default: -> value=-drive,
00:01:35.981 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:35.981 ==> default: -> value=-device,
00:01:35.981 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
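Unrolled from the arg list above, the fourth controller is the one exercised by SPDK_TEST_NVME_FDP: its namespace hangs off an NVMe subsystem device with Flexible Data Placement enabled (96M reclaim units, 2 reclaim groups, 8 reclaim unit handles). As a standalone QEMU fragment with the same parameters taken verbatim from the log (machine and memory options omitted):

    # the FDP controller/namespace chain from the libvirt command line above
    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096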
00:01:35.981 ==> default: Creating shared folders metadata...
00:01:35.981 ==> default: Starting domain.
00:01:37.892 ==> default: Waiting for domain to get an IP address...
00:01:59.860 ==> default: Waiting for SSH to become available...
00:01:59.860 ==> default: Configuring and enabling network interfaces...
00:02:02.408 default: SSH address: 192.168.121.193:22
00:02:02.408 default: SSH username: vagrant
00:02:02.408 default: SSH auth method: private key
00:02:04.312 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:10.938 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:17.526 ==> default: Mounting SSHFS shared folder...
00:02:18.984 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:18.984 ==> default: Checking Mount..
00:02:20.371 ==> default: Folder Successfully Mounted!
00:02:20.371
00:02:20.371 SUCCESS!
00:02:20.371
00:02:20.371 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:20.371 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:20.371 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:20.371
00:02:20.382 [Pipeline] }
00:02:20.398 [Pipeline] // stage
00:02:20.409 [Pipeline] dir
00:02:20.410 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:20.412 [Pipeline] {
00:02:20.427 [Pipeline] catchError
00:02:20.429 [Pipeline] {
00:02:20.443 [Pipeline] sh
00:02:20.729 + vagrant ssh-config --host vagrant
00:02:20.730 + sed -ne '/^Host/,$p'
00:02:20.730 + tee ssh_conf
00:02:23.280 Host vagrant
00:02:23.280 HostName 192.168.121.193
00:02:23.280 User vagrant
00:02:23.280 Port 22
00:02:23.280 UserKnownHostsFile /dev/null
00:02:23.280 StrictHostKeyChecking no
00:02:23.280 PasswordAuthentication no
00:02:23.280 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:23.280 IdentitiesOnly yes
00:02:23.280 LogLevel FATAL
00:02:23.280 ForwardAgent yes
00:02:23.280 ForwardX11 yes
00:02:23.280
00:02:23.295 [Pipeline] withEnv
00:02:23.297 [Pipeline] {
00:02:23.311 [Pipeline] sh
00:02:23.598 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:23.598 source /etc/os-release
00:02:23.598 [[ -e /image.version ]] && img=$(< /image.version)
00:02:23.598 # Minimal, systemd-like check.
00:02:23.598 if [[ -e /.dockerenv ]]; then
00:02:23.598 # Clear garbage from the node'\''s name:
00:02:23.598 # agt-er_autotest_547-896 -> autotest_547-896
00:02:23.598 # $HOSTNAME is the actual container id
00:02:23.598 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:23.598 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:23.598 # We can assume this is a mount from a host where container is running,
00:02:23.598 # so fetch its hostname to easily identify the target swarm worker.
00:02:23.598 container="$(< /etc/hostname) ($agent)"
00:02:23.598 else
00:02:23.598 # Fallback
00:02:23.598 container=$agent
00:02:23.598 fi
00:02:23.598 fi
00:02:23.598 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:23.598 '
00:02:23.872 [Pipeline] }
00:02:23.887 [Pipeline] // withEnv
00:02:23.896 [Pipeline] setCustomBuildProperty
00:02:23.911 [Pipeline] stage
00:02:23.913 [Pipeline] { (Tests)
00:02:23.931 [Pipeline] sh
00:02:24.217 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:24.494 [Pipeline] sh
00:02:24.778 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:25.055 [Pipeline] timeout
00:02:25.056 Timeout set to expire in 50 min
00:02:25.057 [Pipeline] {
00:02:25.070 [Pipeline] sh
00:02:25.354 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:25.925 HEAD is now at 2a91567e4 CHANGELOG.md: corrected typo
00:02:25.938 [Pipeline] sh
00:02:26.219 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:26.492 [Pipeline] sh
00:02:26.793 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:27.100 [Pipeline] sh
00:02:27.387 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:27.649 ++ readlink -f spdk_repo
00:02:27.649 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:27.649 + [[ -n /home/vagrant/spdk_repo ]]
00:02:27.649 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:27.649 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:27.649 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:27.649 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:27.649 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:27.649 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:27.649 + cd /home/vagrant/spdk_repo
00:02:27.649 + source /etc/os-release
00:02:27.649 ++ NAME='Fedora Linux'
00:02:27.649 ++ VERSION='39 (Cloud Edition)'
00:02:27.649 ++ ID=fedora
00:02:27.649 ++ VERSION_ID=39
00:02:27.649 ++ VERSION_CODENAME=
00:02:27.649 ++ PLATFORM_ID=platform:f39
00:02:27.649 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:27.649 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:27.649 ++ LOGO=fedora-logo-icon
00:02:27.649 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:27.649 ++ HOME_URL=https://fedoraproject.org/
00:02:27.649 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:27.649 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:27.649 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:27.649 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:27.649 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:27.649 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:27.649 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:27.649 ++ SUPPORT_END=2024-11-12
00:02:27.649 ++ VARIANT='Cloud Edition'
00:02:27.649 ++ VARIANT_ID=cloud
00:02:27.649 + uname -a
00:02:27.649 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:27.649 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:27.911 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:28.486 Hugepages
00:02:28.486 node hugesize free / total
00:02:28.486 node0 1048576kB 0 / 0
00:02:28.486 node0 2048kB 0 / 0
00:02:28.486
00:02:28.486 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:28.486 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:28.486 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:28.486 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:28.486 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:28.486 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:28.486 + rm -f /tmp/spdk-ld-path
00:02:28.486 + source autorun-spdk.conf
00:02:28.486 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:28.486 ++ SPDK_TEST_NVME=1
00:02:28.486 ++ SPDK_TEST_FTL=1
00:02:28.486 ++ SPDK_TEST_ISAL=1
00:02:28.486 ++ SPDK_RUN_ASAN=1
00:02:28.486 ++ SPDK_RUN_UBSAN=1
00:02:28.486 ++ SPDK_TEST_XNVME=1
00:02:28.486 ++ SPDK_TEST_NVME_FDP=1
00:02:28.486 ++ SPDK_TEST_NATIVE_DPDK=main
00:02:28.486 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:28.486 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:28.486 ++ RUN_NIGHTLY=1
00:02:28.486 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:28.486 + [[ -n '' ]]
00:02:28.486 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:28.486 + for M in /var/spdk/build-*-manifest.txt
00:02:28.486 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:28.486 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:28.486 + for M in /var/spdk/build-*-manifest.txt
00:02:28.486 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:28.486 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:28.486 + for M in /var/spdk/build-*-manifest.txt
00:02:28.486 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:28.486 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:28.486 ++ uname
00:02:28.486 + [[ Linux == \L\i\n\u\x ]]
00:02:28.486 + sudo dmesg -T
00:02:28.486 + sudo dmesg --clear
00:02:28.486 + dmesg_pid=5761
00:02:28.486 + [[ Fedora Linux == FreeBSD ]]
00:02:28.486 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:28.486 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:28.487 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:28.487 + [[ -x /usr/src/fio-static/fio ]]
00:02:28.487 + sudo dmesg -Tw
00:02:28.487 + export FIO_BIN=/usr/src/fio-static/fio
00:02:28.487 + FIO_BIN=/usr/src/fio-static/fio
00:02:28.487 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:28.487 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:28.487 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:28.487 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:28.487 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:28.487 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:28.487 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:28.487 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:28.487 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:28.748 00:47:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:28.748 00:47:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=main
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:28.748 00:47:51 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:28.749 00:47:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:28.749 00:47:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:28.749 00:47:51 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:28.749 00:47:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:28.749 00:47:51 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:28.749 00:47:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:28.749 00:47:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:28.749 00:47:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:28.749 00:47:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.749 00:47:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.749 00:47:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.749 00:47:51 -- paths/export.sh@5 -- $ export PATH
00:02:28.749 00:47:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:28.749 00:47:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:28.749 00:47:51 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:28.749 00:47:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732582071.XXXXXX
00:02:28.749 00:47:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732582071.Cn267z
00:02:28.749 00:47:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:28.749 00:47:51 -- common/autobuild_common.sh@499 -- $ '[' -n main ']'
00:02:28.749 00:47:51 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:28.749 00:47:51 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:28.749 00:47:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:28.749 00:47:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:28.749 00:47:51 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:28.749 00:47:51 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:28.749 00:47:51 -- common/autotest_common.sh@10 -- $ set +x
00:02:28.749 00:47:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:28.749 00:47:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:28.749 00:47:51 -- pm/common@17 -- $ local monitor
00:02:28.749 00:47:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.749 00:47:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:28.749 00:47:51 -- pm/common@25 -- $ sleep 1
00:02:28.749 00:47:51 -- pm/common@21 -- $ date +%s
00:02:28.749 00:47:51 -- pm/common@21 -- $ date +%s
00:02:28.749 00:47:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732582071
00:02:28.749 00:47:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732582071
00:02:28.749 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732582071_collect-cpu-load.pm.log
00:02:28.749 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732582071_collect-vmstat.pm.log
00:02:29.694 00:47:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:29.694 00:47:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:29.694 00:47:52 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:29.694 00:47:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:29.694 00:47:52 -- spdk/autobuild.sh@16 -- $ date -u
00:02:29.694 Tue Nov 26 12:47:52 AM UTC 2024
00:02:29.694 00:47:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:29.694 v25.01-pre-240-g2a91567e4
00:02:29.694 00:47:52 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:29.694 00:47:52 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:29.694 00:47:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:29.694 00:47:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:29.694 00:47:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.694 ************************************
00:02:29.694 START TEST asan
00:02:29.694 ************************************
00:02:29.694 using asan
00:02:29.694 00:47:52 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:29.694
00:02:29.694 real 0m0.000s
00:02:29.694 user 0m0.000s
00:02:29.694 sys 0m0.000s
00:02:29.694 00:47:52 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:29.694 00:47:52 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:29.956 ************************************
00:02:29.956 END TEST asan
00:02:29.956 ************************************
00:02:29.956 00:47:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:29.956 00:47:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:29.956 00:47:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:29.956 00:47:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:29.956 00:47:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.956 ************************************
00:02:29.956 START TEST ubsan
00:02:29.956 ************************************
00:02:29.956 using ubsan
00:02:29.956 00:47:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:29.956
00:02:29.956 real 0m0.000s
00:02:29.956 user 0m0.000s
00:02:29.956 sys 0m0.000s
00:02:29.956 ************************************
00:02:29.956 END TEST ubsan
00:02:29.956 ************************************
00:02:29.956 00:47:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:29.956 00:47:52 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:29.956 00:47:52 -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:02:29.956 00:47:52 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:29.956 00:47:52 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:29.956 00:47:52 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:29.956 00:47:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:29.956 00:47:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:29.956 ************************************
00:02:29.956 START TEST build_native_dpdk
00:02:29.956 ************************************
00:02:29.956 00:47:52 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:29.956 00:47:52 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:29.956 f4ccce58c1 doc: allow warnings in Sphinx for DTS
00:02:29.956 0c0cd5ffb0 version: 24.11-rc3
00:02:29.956 8c9a7471a0 dts: add checksum offload test suite
00:02:29.956 bee7cf823c dts: add checksum offload to testpmd shell
00:02:29.956 2eef9a80df dts: add dynamic queue test suite
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc3
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc3 21.11.0
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 21.11.0
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:29.957 patching file config/rte_config.h
00:02:29.957 Hunk #1 succeeded at 72 (offset 13 lines).
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc3 24.07.0
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 24.07.0
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc3 24.07.0
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc3 '>=' 24.07.0
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:29.957 00:47:52 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1
00:02:29.957 patching file drivers/bus/pci/linux/pci_uio.c
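The xtrace above is cmp_versions from scripts/common.sh deciding which DPDK compatibility patches apply: each version string is split into fields on ".", "-" and ":", each field is normalized by decimal() (07 becomes 7), and the fields are compared numerically left to right. Condensed into a standalone sketch (the real script's hex handling and error checks are omitted, and treating non-numeric fields like rc3 as 0 is an assumption):

    # sketch of the traced "ver1 < ver2" comparison (cmp_versions with op '<')
    lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v f1 f2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            f1=${ver1[v]:-0} f2=${ver2[v]:-0}
            [[ $f1 =~ ^[0-9]+$ ]] || f1=0       # e.g. the rc3 field (assumed behavior)
            [[ $f2 =~ ^[0-9]+$ ]] || f2=0
            (( 10#$f1 > 10#$f2 )) && return 1   # 10# forces base 10, so 07 -> 7
            (( 10#$f1 < 10#$f2 )) && return 0
        done
        return 1
    }
    lt 24.11.0-rc3 24.07.0 && echo older   # prints nothing: 11 > 7, matching the 'return 1' above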
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:29.957 00:47:52 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:29.958 00:47:52 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:36.546 The Meson build system
00:02:36.546 Version: 1.5.0
00:02:36.546 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:36.546 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:36.546 Build type: native build
00:02:36.546 Project name: DPDK
00:02:36.546 Project version: 24.11.0-rc3
00:02:36.546 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:36.546 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:36.546 Host machine cpu family: x86_64
00:02:36.546 Host machine cpu: x86_64
00:02:36.546 Message: ## Building in Developer Mode ##
00:02:36.546 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:36.546 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:36.546 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:36.546 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:02:36.546 Program cat found: YES (/usr/bin/cat)
00:02:36.546 config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:36.546 Compiler for C supports arguments -march=native: YES
00:02:36.546 Checking for size of "void *" : 8
00:02:36.546 Checking for size of "void *" : 8 (cached)
00:02:36.546 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:36.546 Library m found: YES
00:02:36.546 Library numa found: YES
00:02:36.546 Has header "numaif.h" : YES
00:02:36.546 Library fdt found: NO
00:02:36.546 Library execinfo found: NO
00:02:36.546 Has header "execinfo.h" : YES
00:02:36.546 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:36.546 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:36.546 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:36.546 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:36.546 Run-time dependency openssl found: YES 3.1.1
00:02:36.546 Run-time dependency libpcap found: YES 1.10.4
00:02:36.546 Has header "pcap.h" with dependency libpcap: YES
00:02:36.546 Compiler for C supports arguments -Wcast-qual: YES
00:02:36.546 Compiler for C supports arguments -Wdeprecated: YES
00:02:36.546 Compiler for C supports arguments -Wformat: YES
00:02:36.546 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:36.546 Compiler for C supports arguments -Wformat-security: NO
00:02:36.546 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:36.546 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:36.546 Compiler for C supports arguments -Wnested-externs: YES
00:02:36.546 Compiler for C supports arguments -Wold-style-definition: YES
00:02:36.546 Compiler for C supports arguments -Wpointer-arith: YES
00:02:36.546 Compiler for C supports arguments -Wsign-compare: YES
00:02:36.546 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:36.546 Compiler for C supports arguments -Wundef: YES
00:02:36.546 Compiler for C supports arguments -Wwrite-strings: YES
00:02:36.546 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:36.546 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:36.546 Program objdump found: YES (/usr/bin/objdump)
00:02:36.546 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES
00:02:36.546 Checking if "AVX512 checking" compiles: YES
00:02:36.546 Fetching value of define "__AVX512F__" : 1
00:02:36.546 Fetching value of define "__AVX512BW__" : 1
00:02:36.546 Fetching value of define "__AVX512DQ__" : 1
00:02:36.546 Fetching value of define "__AVX512VL__" : 1
00:02:36.546 Fetching value of define "__SSE4_2__" : 1
00:02:36.546 Fetching value of define "__AES__" : 1
00:02:36.546 Fetching value of define "__AVX__" : 1
00:02:36.546 Fetching value of define "__AVX2__" : 1
00:02:36.546 Fetching value of define "__AVX512BW__" : 1
00:02:36.546 Fetching value of define "__AVX512CD__" : 1
00:02:36.546 Fetching value of define "__AVX512DQ__" : 1
00:02:36.546 Fetching value of define "__AVX512F__" : 1
00:02:36.546 Fetching value of define "__AVX512VL__" : 1
00:02:36.546 Fetching value of define "__PCLMUL__" : 1
00:02:36.546 Fetching value of define "__RDRND__" : 1
00:02:36.546 Fetching value of define "__RDSEED__" : 1
00:02:36.546 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:36.546 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:36.546 Message: lib/log: Defining dependency "log"
00:02:36.546 Message: lib/kvargs: Defining dependency "kvargs"
00:02:36.546 Message: lib/argparse: Defining dependency "argparse"
00:02:36.546 Message: lib/telemetry: Defining dependency "telemetry"
00:02:36.546 Checking for function "pthread_attr_setaffinity_np" : YES
00:02:36.546 Checking for function "getentropy" : NO
00:02:36.546 Message: lib/eal: Defining dependency "eal"
00:02:36.546 Message: lib/ptr_compress: Defining dependency "ptr_compress"
00:02:36.546 Message: lib/ring: Defining dependency "ring"
00:02:36.546 Message: lib/rcu: Defining dependency "rcu"
00:02:36.546 Message: lib/mempool: Defining dependency "mempool"
00:02:36.546 Message: lib/mbuf: Defining dependency "mbuf"
00:02:36.546 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:36.546 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:36.546 Compiler for C supports arguments -mpclmul: YES
00:02:36.546 Compiler for C supports arguments -maes: YES
00:02:36.546 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:36.546 Message: lib/net: Defining dependency "net"
00:02:36.546 Message: lib/meter: Defining dependency "meter"
00:02:36.546 Message: lib/ethdev: Defining dependency "ethdev"
00:02:36.546 Message: lib/pci: Defining dependency "pci"
00:02:36.546 Message: lib/cmdline: Defining dependency "cmdline"
00:02:36.546 Message: lib/metrics: Defining dependency "metrics"
00:02:36.546 Message: lib/hash: Defining dependency "hash"
00:02:36.546 Message: lib/timer: Defining dependency "timer"
00:02:36.546 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:36.546 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:36.546 Fetching value of define "__AVX512CD__" : 1 (cached)
00:02:36.546 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:36.546 Message: lib/acl: Defining dependency "acl"
00:02:36.546 Message: lib/bbdev: Defining dependency "bbdev"
00:02:36.546 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:36.546 Run-time dependency libelf found: YES 0.191
00:02:36.546 Message: lib/bpf: Defining dependency "bpf"
00:02:36.546 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:36.546 Message: lib/compressdev: Defining dependency "compressdev"
00:02:36.546 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:36.546 Message: lib/distributor: Defining dependency "distributor"
00:02:36.546 Message: lib/dmadev: Defining dependency "dmadev"
00:02:36.546 Message: lib/efd: Defining dependency "efd"
00:02:36.546 Message: lib/eventdev: Defining dependency "eventdev"
00:02:36.546 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:36.546 Message: lib/gpudev: Defining dependency "gpudev"
00:02:36.546 Message: lib/gro: Defining dependency "gro"
00:02:36.546 Message: lib/gso: Defining dependency "gso"
00:02:36.546 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:36.546 Message: lib/jobstats: Defining dependency "jobstats"
00:02:36.546 Message: lib/latencystats: Defining dependency "latencystats"
00:02:36.546 Message: lib/lpm: Defining dependency "lpm"
00:02:36.546 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:36.546 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:36.546 Fetching value of define "__AVX512IFMA__" : 1
00:02:36.546 Message: lib/member: Defining dependency "member"
00:02:36.546 Message: lib/pcapng: Defining dependency "pcapng"
00:02:36.546 Message: lib/power: Defining dependency "power"
00:02:36.546 Message: lib/rawdev: Defining dependency "rawdev"
00:02:36.546 Message: lib/regexdev: Defining dependency "regexdev"
00:02:36.546 Message: lib/mldev: Defining dependency "mldev"
00:02:36.546 Message: lib/rib: Defining dependency "rib"
00:02:36.546 Message: lib/reorder: Defining dependency "reorder"
00:02:36.546 Message: lib/sched: Defining dependency "sched"
00:02:36.546 Message: lib/security: Defining dependency "security"
00:02:36.547 Message: lib/stack: Defining dependency "stack"
00:02:36.547 Has header "linux/userfaultfd.h" : YES
00:02:36.547 Message: lib/vhost: Defining dependency "vhost"
00:02:36.547 Message: lib/ipsec: Defining dependency "ipsec"
00:02:36.547 Message: lib/pdcp: Defining dependency "pdcp"
00:02:36.547 Message: lib/fib: Defining dependency "fib"
00:02:36.547 Message: lib/port: Defining dependency "port"
00:02:36.547 Message: lib/pdump: Defining dependency "pdump"
00:02:36.547 Message: lib/table: Defining dependency "table"
00:02:36.547 Message: lib/pipeline: Defining dependency "pipeline"
00:02:36.547 Message: lib/graph: Defining dependency "graph"
00:02:36.547 Message: lib/node: Defining dependency "node"
00:02:36.547 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:36.547 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:36.547 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:36.547 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:36.547 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:36.547 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:36.547 Compiler for C supports arguments -Wno-unused-value: YES
00:02:36.547 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:36.547 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:36.547 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:36.547 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:36.547 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:36.547 Message: drivers/power/acpi: Defining dependency "power_acpi"
00:02:36.547 Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate"
00:02:36.547 Message: drivers/power/cppc: Defining dependency "power_cppc"
00:02:36.547 Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate"
00:02:36.547 Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore"
00:02:36.547 Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm"
00:02:36.547 Has header "sys/epoll.h" : YES
00:02:36.547 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:36.547 Configuring doxy-api-html.conf using configuration
00:02:36.547 Configuring doxy-api-man.conf using configuration
00:02:36.547 Program mandb found: YES (/usr/bin/mandb)
00:02:36.547 Program
sphinx-build found: NO 00:02:36.547 Program sphinx-build found: NO 00:02:36.547 Configuring rte_build_config.h using configuration 00:02:36.547 Message: 00:02:36.547 ================= 00:02:36.547 Applications Enabled 00:02:36.547 ================= 00:02:36.547 00:02:36.547 apps: 00:02:36.547 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:36.547 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:36.547 test-pmd, test-regex, test-sad, test-security-perf, 00:02:36.547 00:02:36.547 Message: 00:02:36.547 ================= 00:02:36.547 Libraries Enabled 00:02:36.547 ================= 00:02:36.547 00:02:36.547 libs: 00:02:36.547 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:02:36.547 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:02:36.547 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:02:36.547 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:02:36.547 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:02:36.547 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:02:36.547 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:02:36.547 graph, node, 00:02:36.547 00:02:36.547 Message: 00:02:36.547 =============== 00:02:36.547 Drivers Enabled 00:02:36.547 =============== 00:02:36.547 00:02:36.547 common: 00:02:36.547 00:02:36.547 bus: 00:02:36.547 pci, vdev, 00:02:36.547 mempool: 00:02:36.547 ring, 00:02:36.547 dma: 00:02:36.547 00:02:36.547 net: 00:02:36.547 i40e, 00:02:36.547 raw: 00:02:36.547 00:02:36.547 crypto: 00:02:36.547 00:02:36.547 compress: 00:02:36.547 00:02:36.547 regex: 00:02:36.547 00:02:36.547 ml: 00:02:36.547 00:02:36.547 vdpa: 00:02:36.547 00:02:36.547 event: 00:02:36.547 00:02:36.547 baseband: 00:02:36.547 00:02:36.547 gpu: 00:02:36.547 00:02:36.547 power: 00:02:36.547 acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 00:02:36.547 00:02:36.547 Message: 00:02:36.547 ================= 00:02:36.547 Content Skipped 00:02:36.547 ================= 00:02:36.547 00:02:36.547 apps: 00:02:36.547 00:02:36.547 libs: 00:02:36.547 00:02:36.547 drivers: 00:02:36.547 common/cpt: not in enabled drivers build config 00:02:36.547 common/dpaax: not in enabled drivers build config 00:02:36.547 common/iavf: not in enabled drivers build config 00:02:36.547 common/idpf: not in enabled drivers build config 00:02:36.547 common/ionic: not in enabled drivers build config 00:02:36.547 common/mvep: not in enabled drivers build config 00:02:36.547 common/octeontx: not in enabled drivers build config 00:02:36.547 bus/auxiliary: not in enabled drivers build config 00:02:36.547 bus/cdx: not in enabled drivers build config 00:02:36.547 bus/dpaa: not in enabled drivers build config 00:02:36.547 bus/fslmc: not in enabled drivers build config 00:02:36.547 bus/ifpga: not in enabled drivers build config 00:02:36.547 bus/platform: not in enabled drivers build config 00:02:36.547 bus/uacce: not in enabled drivers build config 00:02:36.547 bus/vmbus: not in enabled drivers build config 00:02:36.547 common/cnxk: not in enabled drivers build config 00:02:36.547 common/mlx5: not in enabled drivers build config 00:02:36.547 common/nfp: not in enabled drivers build config 00:02:36.547 common/nitrox: not in enabled drivers build config 00:02:36.547 common/qat: not in enabled drivers build config 00:02:36.547 common/sfc_efx: not in enabled drivers build config 00:02:36.547 
mempool/bucket: not in enabled drivers build config 00:02:36.547 mempool/cnxk: not in enabled drivers build config 00:02:36.547 mempool/dpaa: not in enabled drivers build config 00:02:36.547 mempool/dpaa2: not in enabled drivers build config 00:02:36.547 mempool/octeontx: not in enabled drivers build config 00:02:36.547 mempool/stack: not in enabled drivers build config 00:02:36.547 dma/cnxk: not in enabled drivers build config 00:02:36.547 dma/dpaa: not in enabled drivers build config 00:02:36.547 dma/dpaa2: not in enabled drivers build config 00:02:36.547 dma/hisilicon: not in enabled drivers build config 00:02:36.547 dma/idxd: not in enabled drivers build config 00:02:36.547 dma/ioat: not in enabled drivers build config 00:02:36.547 dma/odm: not in enabled drivers build config 00:02:36.547 dma/skeleton: not in enabled drivers build config 00:02:36.547 net/af_packet: not in enabled drivers build config 00:02:36.547 net/af_xdp: not in enabled drivers build config 00:02:36.547 net/ark: not in enabled drivers build config 00:02:36.547 net/atlantic: not in enabled drivers build config 00:02:36.547 net/avp: not in enabled drivers build config 00:02:36.547 net/axgbe: not in enabled drivers build config 00:02:36.547 net/bnx2x: not in enabled drivers build config 00:02:36.547 net/bnxt: not in enabled drivers build config 00:02:36.547 net/bonding: not in enabled drivers build config 00:02:36.547 net/cnxk: not in enabled drivers build config 00:02:36.547 net/cpfl: not in enabled drivers build config 00:02:36.547 net/cxgbe: not in enabled drivers build config 00:02:36.547 net/dpaa: not in enabled drivers build config 00:02:36.547 net/dpaa2: not in enabled drivers build config 00:02:36.547 net/e1000: not in enabled drivers build config 00:02:36.547 net/ena: not in enabled drivers build config 00:02:36.547 net/enetc: not in enabled drivers build config 00:02:36.547 net/enetfec: not in enabled drivers build config 00:02:36.547 net/enic: not in enabled drivers build config 00:02:36.547 net/failsafe: not in enabled drivers build config 00:02:36.547 net/fm10k: not in enabled drivers build config 00:02:36.547 net/gve: not in enabled drivers build config 00:02:36.547 net/hinic: not in enabled drivers build config 00:02:36.547 net/hns3: not in enabled drivers build config 00:02:36.547 net/iavf: not in enabled drivers build config 00:02:36.547 net/ice: not in enabled drivers build config 00:02:36.547 net/idpf: not in enabled drivers build config 00:02:36.547 net/igc: not in enabled drivers build config 00:02:36.547 net/ionic: not in enabled drivers build config 00:02:36.547 net/ipn3ke: not in enabled drivers build config 00:02:36.547 net/ixgbe: not in enabled drivers build config 00:02:36.547 net/mana: not in enabled drivers build config 00:02:36.547 net/memif: not in enabled drivers build config 00:02:36.547 net/mlx4: not in enabled drivers build config 00:02:36.547 net/mlx5: not in enabled drivers build config 00:02:36.548 net/mvneta: not in enabled drivers build config 00:02:36.548 net/mvpp2: not in enabled drivers build config 00:02:36.548 net/netvsc: not in enabled drivers build config 00:02:36.548 net/nfb: not in enabled drivers build config 00:02:36.548 net/nfp: not in enabled drivers build config 00:02:36.548 net/ngbe: not in enabled drivers build config 00:02:36.548 net/ntnic: not in enabled drivers build config 00:02:36.548 net/null: not in enabled drivers build config 00:02:36.548 net/octeontx: not in enabled drivers build config 00:02:36.548 net/octeon_ep: not in enabled drivers build config 
00:02:36.548 net/pcap: not in enabled drivers build config 00:02:36.548 net/pfe: not in enabled drivers build config 00:02:36.548 net/qede: not in enabled drivers build config 00:02:36.548 net/r8169: not in enabled drivers build config 00:02:36.548 net/ring: not in enabled drivers build config 00:02:36.548 net/sfc: not in enabled drivers build config 00:02:36.548 net/softnic: not in enabled drivers build config 00:02:36.548 net/tap: not in enabled drivers build config 00:02:36.548 net/thunderx: not in enabled drivers build config 00:02:36.548 net/txgbe: not in enabled drivers build config 00:02:36.548 net/vdev_netvsc: not in enabled drivers build config 00:02:36.548 net/vhost: not in enabled drivers build config 00:02:36.548 net/virtio: not in enabled drivers build config 00:02:36.548 net/vmxnet3: not in enabled drivers build config 00:02:36.548 net/zxdh: not in enabled drivers build config 00:02:36.548 raw/cnxk_bphy: not in enabled drivers build config 00:02:36.548 raw/cnxk_gpio: not in enabled drivers build config 00:02:36.548 raw/cnxk_rvu_lf: not in enabled drivers build config 00:02:36.548 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:36.548 raw/gdtc: not in enabled drivers build config 00:02:36.548 raw/ifpga: not in enabled drivers build config 00:02:36.548 raw/ntb: not in enabled drivers build config 00:02:36.548 raw/skeleton: not in enabled drivers build config 00:02:36.548 crypto/armv8: not in enabled drivers build config 00:02:36.548 crypto/bcmfs: not in enabled drivers build config 00:02:36.548 crypto/caam_jr: not in enabled drivers build config 00:02:36.548 crypto/ccp: not in enabled drivers build config 00:02:36.548 crypto/cnxk: not in enabled drivers build config 00:02:36.548 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.548 crypto/dpaa2_sec: not in enabled drivers build config 00:02:36.548 crypto/ionic: not in enabled drivers build config 00:02:36.548 crypto/ipsec_mb: not in enabled drivers build config 00:02:36.548 crypto/mlx5: not in enabled drivers build config 00:02:36.548 crypto/mvsam: not in enabled drivers build config 00:02:36.548 crypto/nitrox: not in enabled drivers build config 00:02:36.548 crypto/null: not in enabled drivers build config 00:02:36.548 crypto/octeontx: not in enabled drivers build config 00:02:36.548 crypto/openssl: not in enabled drivers build config 00:02:36.548 crypto/scheduler: not in enabled drivers build config 00:02:36.548 crypto/uadk: not in enabled drivers build config 00:02:36.548 crypto/virtio: not in enabled drivers build config 00:02:36.548 compress/isal: not in enabled drivers build config 00:02:36.548 compress/mlx5: not in enabled drivers build config 00:02:36.548 compress/nitrox: not in enabled drivers build config 00:02:36.548 compress/octeontx: not in enabled drivers build config 00:02:36.548 compress/uadk: not in enabled drivers build config 00:02:36.548 compress/zlib: not in enabled drivers build config 00:02:36.548 regex/mlx5: not in enabled drivers build config 00:02:36.548 regex/cn9k: not in enabled drivers build config 00:02:36.548 ml/cnxk: not in enabled drivers build config 00:02:36.548 vdpa/ifc: not in enabled drivers build config 00:02:36.548 vdpa/mlx5: not in enabled drivers build config 00:02:36.548 vdpa/nfp: not in enabled drivers build config 00:02:36.548 vdpa/sfc: not in enabled drivers build config 00:02:36.548 event/cnxk: not in enabled drivers build config 00:02:36.548 event/dlb2: not in enabled drivers build config 00:02:36.548 event/dpaa: not in enabled drivers build config 
00:02:36.548 event/dpaa2: not in enabled drivers build config 00:02:36.548 event/dsw: not in enabled drivers build config 00:02:36.548 event/opdl: not in enabled drivers build config 00:02:36.548 event/skeleton: not in enabled drivers build config 00:02:36.548 event/sw: not in enabled drivers build config 00:02:36.548 event/octeontx: not in enabled drivers build config 00:02:36.548 baseband/acc: not in enabled drivers build config 00:02:36.548 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:36.548 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:36.548 baseband/la12xx: not in enabled drivers build config 00:02:36.548 baseband/null: not in enabled drivers build config 00:02:36.548 baseband/turbo_sw: not in enabled drivers build config 00:02:36.548 gpu/cuda: not in enabled drivers build config 00:02:36.548 power/amd_uncore: not in enabled drivers build config 00:02:36.548 00:02:36.548 00:02:36.548 Message: DPDK build config complete: 00:02:36.548 source path = "/home/vagrant/spdk_repo/dpdk" 00:02:36.548 build path = "/home/vagrant/spdk_repo/dpdk/build-tmp" 00:02:36.548 Build targets in project: 244 00:02:36.548 00:02:36.548 DPDK 24.11.0-rc3 00:02:36.548 00:02:36.548 User defined options 00:02:36.548 libdir : lib 00:02:36.548 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:36.548 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:36.548 c_link_args : 00:02:36.548 enable_docs : false 00:02:36.548 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:36.548 enable_kmods : false 00:02:36.809 machine : native 00:02:36.809 tests : false 00:02:36.809 00:02:36.809 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.809 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
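[Editor's note] The trailing WARNING is meson pointing out that the configure step above used the legacy "meson [options]" spelling. The non-deprecated form keeps every option unchanged and adds the setup subcommand; per the earlier config/meson.build warning, -Dmachine=native would likewise become -Dcpu_instruction_set=native:

meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,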
00:02:36.809 00:47:59 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:37.071 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:37.071 [1/764] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:02:37.071 [2/764] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:02:37.071 [3/764] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:02:37.071 [4/764] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:02:37.071 [5/764] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.071 [6/764] Linking static target lib/librte_kvargs.a 00:02:37.071 [7/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.071 [8/764] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.332 [9/764] Linking static target lib/librte_log.a 00:02:37.332 [10/764] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:02:37.332 [11/764] Linking static target lib/librte_argparse.a 00:02:37.332 [12/764] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.332 [13/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.332 [14/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.332 [15/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.332 [16/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.332 [17/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.741 [18/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.741 [19/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.741 [20/764] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.741 [21/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.741 [22/764] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.741 [23/764] Linking target lib/librte_log.so.25.0 00:02:37.741 [24/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.009 [25/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:02:38.009 [26/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.009 [27/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:38.009 [28/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.009 [29/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.009 [30/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.009 [31/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.009 [32/764] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.009 [33/764] Linking static target lib/librte_telemetry.a 00:02:38.270 [34/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:38.270 [35/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.270 [36/764] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:02:38.270 [37/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.270 [38/764] Linking target lib/librte_kvargs.so.25.0 00:02:38.270 
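[Editor's note] On the ninja step: target names mirror the output paths shown in the [n/764] progress lines, so a single artifact can be rebuilt in isolation. For example, to rebuild only the EAL static library after touching one of its sources (paths taken from this build tree):

ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp lib/librte_eal.a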
[39/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.270 [40/764] Linking target lib/librte_argparse.so.25.0 00:02:38.270 [41/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.532 [42/764] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:02:38.532 [43/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.532 [44/764] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.532 [45/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.532 [46/764] Linking target lib/librte_telemetry.so.25.0 00:02:38.532 [47/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.532 [48/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.532 [49/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.532 [50/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.532 [51/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.532 [52/764] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:02:38.793 [53/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:02:38.793 [54/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.793 [55/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.793 [56/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.055 [57/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:39.056 [58/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.056 [59/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:39.056 [60/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:39.056 [61/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:39.056 [62/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.056 [63/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.317 [64/764] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.317 [65/764] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:39.317 [66/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.317 [67/764] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.317 [68/764] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.317 [69/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.317 [70/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.317 [71/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:39.579 [72/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.579 [73/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.579 [74/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.579 [75/764] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.579 [76/764] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.840 [77/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:39.840 [78/764] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:39.840 [79/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:39.840 [80/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:39.840 [81/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:39.840 [82/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.840 [83/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:39.840 [84/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.099 [85/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.099 [86/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.099 [87/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.099 [88/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.099 [89/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.099 [90/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:02:40.357 [91/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.357 [92/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:40.357 [93/764] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.357 [94/764] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.357 [95/764] Linking static target lib/librte_ring.a 00:02:40.357 [96/764] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.615 [97/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.615 [98/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.615 [99/764] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.615 [100/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.615 [101/764] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:40.615 [102/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.615 [103/764] Linking static target lib/librte_eal.a 00:02:40.873 [104/764] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.873 [105/764] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.873 [106/764] Linking static target lib/librte_rcu.a 00:02:40.873 [107/764] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.873 [108/764] Linking static target lib/librte_mempool.a 00:02:40.873 [109/764] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.873 [110/764] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:40.873 [111/764] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.132 [112/764] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:41.132 [113/764] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.132 [114/764] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.132 [115/764] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:41.132 [116/764] Linking static target lib/librte_net.a 00:02:41.132 [117/764] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.132 [118/764] Linking static target lib/librte_meter.a 00:02:41.391 [119/764] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.391 [120/764] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:41.391 [121/764] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.391 [122/764] Linking static target lib/librte_mbuf.a 00:02:41.391 [123/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.391 [124/764] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.391 [125/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.391 [126/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.650 [127/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.908 [128/764] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.908 [129/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.908 [130/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:42.166 [131/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.166 [132/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:42.166 [133/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.166 [134/764] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:42.166 [135/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.166 [136/764] Linking static target lib/librte_pci.a 00:02:42.166 [137/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.423 [138/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.423 [139/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.423 [140/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.423 [141/764] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.423 [142/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.423 [143/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.423 [144/764] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:42.423 [145/764] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.423 [146/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.423 [147/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.423 [148/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.423 [149/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.679 [150/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.679 [151/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.679 [152/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.679 [153/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.679 [154/764] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.679 [155/764] Linking static target lib/librte_cmdline.a 00:02:42.979 [156/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:42.979 [157/764] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:42.979 [158/764] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:42.979 [159/764] Linking static target lib/librte_metrics.a 
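[Editor's note] Each "Linking target lib/librte_X.so.25.0" step produces a shared object whose DT_NEEDED entries reference the rte libraries linked before it. Once a given target exists, the chain can be inspected directly; a quick look (ldd assumed available) at one of the objects already linked above, showing which earlier rte libraries, if any, it pulls in:

ldd /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_telemetry.so.25.0 | grep librte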
00:02:42.979 [160/764] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.979 [161/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:43.261 [162/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:43.261 [163/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o 00:02:43.261 [164/764] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.261 [165/764] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.519 [166/764] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.519 [167/764] Linking static target lib/librte_timer.a 00:02:43.519 [168/764] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.519 [169/764] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.777 [170/764] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:43.777 [171/764] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:43.777 [172/764] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:43.777 [173/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:44.035 [174/764] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:44.035 [175/764] Linking static target lib/librte_bitratestats.a 00:02:44.035 [176/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:44.292 [177/764] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.550 [178/764] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:44.550 [179/764] Linking static target lib/librte_bbdev.a 00:02:44.550 [180/764] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.550 [181/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:44.550 [182/764] Linking static target lib/librte_ethdev.a 00:02:44.550 [183/764] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:44.808 [184/764] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:44.808 [185/764] Linking static target lib/acl/libavx2_tmp.a 00:02:44.808 [186/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:44.808 [187/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:44.808 [188/764] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.808 [189/764] Linking static target lib/librte_hash.a 00:02:44.808 [190/764] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.808 [191/764] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.808 [192/764] Linking target lib/librte_eal.so.25.0 00:02:45.065 [193/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:45.065 [194/764] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:02:45.065 [195/764] Linking target lib/librte_ring.so.25.0 00:02:45.065 [196/764] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:02:45.065 [197/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:45.065 [198/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:45.065 [199/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:45.065 [200/764] Linking target lib/librte_rcu.so.25.0 00:02:45.065 [201/764] Linking target lib/librte_mempool.so.25.0 00:02:45.065 [202/764] Linking target lib/librte_meter.so.25.0 
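[Editor's note] The "Generating symbol file ..." and "lib/X.sym_chk" steps capture each shared object's exports so they can be verified (buildtools/check-symbols.sh was picked up during configure for exactly this). A rough manual equivalent with binutils, listing exported functions of librte_eal.so:

nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_eal.so.25.0 \
    | awk '$2 == "T" { print $3 }' | head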
00:02:45.065 [203/764] Linking target lib/librte_pci.so.25.0 00:02:45.322 [204/764] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:02:45.322 [205/764] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.322 [206/764] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:02:45.322 [207/764] Linking target lib/librte_timer.so.25.0 00:02:45.322 [208/764] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:02:45.322 [209/764] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:02:45.322 [210/764] Linking target lib/librte_mbuf.so.25.0 00:02:45.322 [211/764] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:02:45.322 [212/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:45.323 [213/764] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:02:45.323 [214/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:45.323 [215/764] Linking target lib/librte_net.so.25.0 00:02:45.323 [216/764] Linking target lib/librte_bbdev.so.25.0 00:02:45.581 [217/764] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:45.581 [218/764] Linking static target lib/librte_bpf.a 00:02:45.581 [219/764] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:02:45.581 [220/764] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:45.581 [221/764] Linking target lib/librte_hash.so.25.0 00:02:45.581 [222/764] Linking target lib/librte_cmdline.so.25.0 00:02:45.581 [223/764] Linking static target lib/librte_cfgfile.a 00:02:45.581 [224/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:45.581 [225/764] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:45.581 [226/764] Linking static target lib/librte_acl.a 00:02:45.581 [227/764] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:02:45.581 [228/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:45.581 [229/764] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.581 [230/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:45.839 [231/764] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.839 [232/764] Linking target lib/librte_cfgfile.so.25.0 00:02:45.839 [233/764] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:45.839 [234/764] Linking static target lib/librte_compressdev.a 00:02:45.839 [235/764] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.839 [236/764] Linking target lib/librte_acl.so.25.0 00:02:46.097 [237/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:46.097 [238/764] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:02:46.097 [239/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:46.097 [240/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:46.097 [241/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:46.097 [242/764] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:46.097 [243/764] Linking static target 
lib/librte_distributor.a 00:02:46.097 [244/764] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.355 [245/764] Linking target lib/librte_compressdev.so.25.0 00:02:46.355 [246/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:46.355 [247/764] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.355 [248/764] Linking target lib/librte_distributor.so.25.0 00:02:46.355 [249/764] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:46.355 [250/764] Linking static target lib/librte_dmadev.a 00:02:46.612 [251/764] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:46.612 [252/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:46.870 [253/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:46.870 [254/764] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.870 [255/764] Linking target lib/librte_dmadev.so.25.0 00:02:46.870 [256/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:46.870 [257/764] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:02:47.129 [258/764] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:47.129 [259/764] Linking static target lib/librte_efd.a 00:02:47.129 [260/764] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:47.129 [261/764] Linking static target lib/librte_cryptodev.a 00:02:47.129 [262/764] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:47.129 [263/764] Linking static target lib/librte_dispatcher.a 00:02:47.129 [264/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:47.386 [265/764] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.386 [266/764] Linking target lib/librte_efd.so.25.0 00:02:47.386 [267/764] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:47.386 [268/764] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.644 [269/764] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:47.644 [270/764] Linking static target lib/librte_gpudev.a 00:02:47.644 [271/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:47.644 [272/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:47.644 [273/764] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:47.902 [274/764] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:47.902 [275/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:47.902 [276/764] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:47.902 [277/764] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:48.160 [278/764] Linking static target lib/librte_gro.a 00:02:48.160 [279/764] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:48.160 [280/764] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.160 [281/764] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:48.160 [282/764] Linking target lib/librte_cryptodev.so.25.0 00:02:48.160 [283/764] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.160 [284/764] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:48.160 [285/764] Linking target lib/librte_gpudev.so.25.0 00:02:48.160 [286/764] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.160 [287/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:48.160 [288/764] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:02:48.160 [289/764] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:48.160 [290/764] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.418 [291/764] Linking target lib/librte_ethdev.so.25.0 00:02:48.418 [292/764] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:48.418 [293/764] Linking static target lib/librte_gso.a 00:02:48.418 [294/764] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:02:48.418 [295/764] Linking target lib/librte_metrics.so.25.0 00:02:48.418 [296/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:48.418 [297/764] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:48.418 [298/764] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.418 [299/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:48.418 [300/764] Linking target lib/librte_bpf.so.25.0 00:02:48.418 [301/764] Linking static target lib/librte_jobstats.a 00:02:48.418 [302/764] Linking target lib/librte_gro.so.25.0 00:02:48.418 [303/764] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:48.676 [304/764] Linking target lib/librte_gso.so.25.0 00:02:48.676 [305/764] Linking static target lib/librte_eventdev.a 00:02:48.676 [306/764] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:02:48.676 [307/764] Linking target lib/librte_bitratestats.so.25.0 00:02:48.676 [308/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:48.677 [309/764] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:02:48.677 [310/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:48.677 [311/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:48.677 [312/764] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.677 [313/764] Linking target lib/librte_jobstats.so.25.0 00:02:48.677 [314/764] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:48.677 [315/764] Linking static target lib/librte_ip_frag.a 00:02:48.935 [316/764] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:48.935 [317/764] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:48.935 [318/764] Linking static target lib/librte_latencystats.a 00:02:48.935 [319/764] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.935 [320/764] Linking target lib/librte_ip_frag.so.25.0 00:02:48.935 [321/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:48.935 [322/764] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:49.194 [323/764] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:02:49.194 [324/764] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.194 
[325/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:49.194 [326/764] Linking target lib/librte_latencystats.so.25.0 00:02:49.194 [327/764] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o 00:02:49.194 [328/764] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:49.452 [329/764] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:49.452 [330/764] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:49.452 [331/764] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o 00:02:49.452 [332/764] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:49.452 [333/764] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:49.452 [334/764] Linking static target lib/librte_pcapng.a 00:02:49.452 [335/764] Linking static target lib/librte_lpm.a 00:02:49.452 [336/764] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:49.452 [337/764] Linking static target lib/librte_power.a 00:02:49.452 [338/764] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:49.765 [339/764] Linking static target lib/librte_rawdev.a 00:02:49.765 [340/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:49.765 [341/764] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.765 [342/764] Linking target lib/librte_pcapng.so.25.0 00:02:49.765 [343/764] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:49.765 [344/764] Linking static target lib/librte_regexdev.a 00:02:49.765 [345/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:49.765 [346/764] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.765 [347/764] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:49.765 [348/764] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:02:49.765 [349/764] Linking target lib/librte_lpm.so.25.0 00:02:49.765 [350/764] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:02:50.024 [351/764] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.024 [352/764] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:50.024 [353/764] Linking static target lib/librte_member.a 00:02:50.024 [354/764] Linking target lib/librte_rawdev.so.25.0 00:02:50.024 [355/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:50.024 [356/764] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:50.024 [357/764] Linking static target lib/librte_mldev.a 00:02:50.024 [358/764] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.024 [359/764] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.024 [360/764] Linking target lib/librte_eventdev.so.25.0 00:02:50.024 [361/764] Linking target lib/librte_power.so.25.0 00:02:50.282 [362/764] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:02:50.282 [363/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:50.282 [364/764] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols 00:02:50.282 [365/764] Linking target lib/librte_dispatcher.so.25.0 00:02:50.282 [366/764] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:50.282 
[367/764] Linking static target lib/librte_rib.a 00:02:50.282 [368/764] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.282 [369/764] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.282 [370/764] Linking target lib/librte_member.so.25.0 00:02:50.282 [371/764] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:50.282 [372/764] Linking target lib/librte_regexdev.so.25.0 00:02:50.282 [373/764] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:50.282 [374/764] Linking static target lib/librte_reorder.a 00:02:50.282 [375/764] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:50.541 [376/764] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:50.541 [377/764] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.541 [378/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:50.541 [379/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:50.541 [380/764] Linking target lib/librte_reorder.so.25.0 00:02:50.541 [381/764] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.541 [382/764] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:50.541 [383/764] Linking static target lib/librte_stack.a 00:02:50.541 [384/764] Linking target lib/librte_rib.so.25.0 00:02:50.541 [385/764] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:02:50.799 [386/764] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:02:50.799 [387/764] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.799 [388/764] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:50.799 [389/764] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:50.799 [390/764] Linking static target lib/librte_security.a 00:02:50.799 [391/764] Linking target lib/librte_stack.so.25.0 00:02:51.058 [392/764] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:51.058 [393/764] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:51.058 [394/764] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.058 [395/764] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.058 [396/764] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:51.058 [397/764] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:51.058 [398/764] Linking target lib/librte_mldev.so.25.0 00:02:51.058 [399/764] Linking target lib/librte_security.so.25.0 00:02:51.317 [400/764] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:02:51.317 [401/764] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:51.317 [402/764] Linking static target lib/librte_sched.a 00:02:51.576 [403/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:51.576 [404/764] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:51.576 [405/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:51.576 [406/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:51.576 [407/764] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.835 [408/764] Linking target lib/librte_sched.so.25.0 00:02:51.835 [409/764] Generating symbol 
file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:02:51.835 [410/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:52.093 [411/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:52.093 [412/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:52.093 [413/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:52.093 [414/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:52.351 [415/764] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:52.351 [416/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:52.351 [417/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:52.351 [418/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:52.351 [419/764] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:52.351 [420/764] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:02:52.351 [421/764] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:52.351 [422/764] Linking static target lib/librte_ipsec.a 00:02:52.609 [423/764] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:52.609 [424/764] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.609 [425/764] Linking target lib/librte_ipsec.so.25.0 00:02:52.867 [426/764] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:52.867 [427/764] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:02:52.867 [428/764] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:52.867 [429/764] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:53.125 [430/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:53.125 [431/764] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:53.125 [432/764] Linking static target lib/librte_pdcp.a 00:02:53.125 [433/764] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:53.383 [434/764] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:53.383 [435/764] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:53.383 [436/764] Linking static target lib/librte_fib.a 00:02:53.383 [437/764] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:53.383 [438/764] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:53.383 [439/764] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.383 [440/764] Linking target lib/librte_pdcp.so.25.0 00:02:53.640 [441/764] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.640 [442/764] Linking target lib/librte_fib.so.25.0 00:02:53.640 [443/764] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:53.898 [444/764] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:53.898 [445/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:53.898 [446/764] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:54.155 [447/764] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:54.155 [448/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:54.155 [449/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:54.414 [450/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:54.414 [451/764] Compiling C object 
lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:54.414 [452/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:54.414 [453/764] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:54.414 [454/764] Linking static target lib/librte_port.a 00:02:54.414 [455/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:54.414 [456/764] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:54.414 [457/764] Linking static target lib/librte_pdump.a 00:02:54.414 [458/764] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:54.672 [459/764] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:54.672 [460/764] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:54.672 [461/764] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.672 [462/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:54.672 [463/764] Linking target lib/librte_pdump.so.25.0 00:02:54.930 [464/764] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.930 [465/764] Linking target lib/librte_port.so.25.0 00:02:54.930 [466/764] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:02:54.930 [467/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:54.930 [468/764] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:54.930 [469/764] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:02:54.930 [470/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:55.190 [471/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:55.190 [472/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:55.190 [473/764] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:55.449 [474/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:55.449 [475/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:55.449 [476/764] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:55.449 [477/764] Linking static target lib/librte_table.a 00:02:55.707 [478/764] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:55.707 [479/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:55.707 [480/764] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:55.965 [481/764] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:55.965 [482/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:55.965 [483/764] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:55.965 [484/764] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:55.965 [485/764] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.223 [486/764] Linking target lib/librte_table.so.25.0 00:02:56.223 [487/764] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:02:56.223 [488/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:56.223 [489/764] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:56.223 [490/764] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:56.480 [491/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:56.480 
[492/764] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:56.736 [493/764] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:56.736 [494/764] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:56.736 [495/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:56.736 [496/764] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:56.736 [497/764] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:56.736 [498/764] Linking static target lib/librte_graph.a 00:02:56.994 [499/764] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:56.994 [500/764] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:56.994 [501/764] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.251 [502/764] Linking target lib/librte_graph.so.25.0 00:02:57.252 [503/764] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:02:57.252 [504/764] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:57.252 [505/764] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:57.252 [506/764] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.509 [507/764] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:57.509 [508/764] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:57.509 [509/764] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:57.509 [510/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:57.509 [511/764] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:57.509 [512/764] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:57.509 [513/764] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:57.766 [514/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.766 [515/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:57.766 [516/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.766 [517/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.766 [518/764] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:58.025 [519/764] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:58.025 [520/764] Linking static target lib/librte_node.a 00:02:58.025 [521/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:58.025 [522/764] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:58.283 [523/764] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:58.283 [524/764] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.283 [525/764] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:58.283 [526/764] Linking target lib/librte_node.so.25.0 00:02:58.283 [527/764] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:58.283 [528/764] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:58.283 [529/764] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:58.283 [530/764] Linking static target drivers/librte_bus_vdev.a 00:02:58.541 [531/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:58.541 [532/764] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 
00:02:58.541 [533/764] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:58.541 [534/764] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.541 [535/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:58.541 [536/764] Linking static target drivers/librte_bus_pci.a 00:02:58.541 [537/764] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:58.541 [538/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:58.541 [539/764] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.541 [540/764] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:58.541 [541/764] Linking target drivers/librte_bus_vdev.so.25.0 00:02:58.541 [542/764] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:58.798 [543/764] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:58.798 [544/764] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:02:58.798 [545/764] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.798 [546/764] Linking static target drivers/librte_mempool_ring.a 00:02:58.798 [547/764] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.798 [548/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:58.798 [549/764] Linking target drivers/librte_mempool_ring.so.25.0 00:02:58.798 [550/764] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.798 [551/764] Linking target drivers/librte_bus_pci.so.25.0 00:02:59.056 [552/764] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:02:59.056 [553/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:59.313 [554/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:59.313 [555/764] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:59.313 [556/764] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:59.878 [557/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:00.137 [558/764] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:00.137 [559/764] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:00.137 [560/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:00.137 [561/764] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:00.137 [562/764] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:00.137 [563/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:00.396 [564/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:00.396 [565/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:00.654 [566/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:00.654 [567/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:00.654 [568/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:00.912 [569/764] Compiling C object 
drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o 00:03:00.912 [570/764] Linking static target drivers/libtmp_rte_power_acpi.a 00:03:00.912 [571/764] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o 00:03:00.912 [572/764] Linking static target drivers/libtmp_rte_power_amd_pstate.a 00:03:00.912 [573/764] Generating drivers/rte_power_acpi.pmd.c with a custom command 00:03:00.912 [574/764] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:00.912 [575/764] Linking static target drivers/librte_power_acpi.a 00:03:00.912 [576/764] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o 00:03:01.172 [577/764] Linking target drivers/librte_power_acpi.so.25.0 00:03:01.172 [578/764] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command 00:03:01.172 [579/764] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o 00:03:01.172 [580/764] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:01.172 [581/764] Linking static target drivers/librte_power_amd_pstate.a 00:03:01.172 [582/764] Linking static target drivers/libtmp_rte_power_cppc.a 00:03:01.172 [583/764] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o 00:03:01.172 [584/764] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o 00:03:01.172 [585/764] Linking target drivers/librte_power_amd_pstate.so.25.0 00:03:01.172 [586/764] Linking static target drivers/libtmp_rte_power_intel_pstate.a 00:03:01.172 [587/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o 00:03:01.172 [588/764] Generating drivers/rte_power_cppc.pmd.c with a custom command 00:03:01.172 [589/764] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:01.172 [590/764] Linking static target drivers/librte_power_cppc.a 00:03:01.172 [591/764] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o 00:03:01.172 [592/764] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o 00:03:01.172 [593/764] Linking static target drivers/libtmp_rte_power_kvm_vm.a 00:03:01.172 [594/764] Linking target drivers/librte_power_cppc.so.25.0 00:03:01.172 [595/764] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command 00:03:01.430 [596/764] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:01.430 [597/764] Linking static target drivers/librte_power_intel_pstate.a 00:03:01.430 [598/764] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o 00:03:01.430 [599/764] Linking target drivers/librte_power_intel_pstate.so.25.0 00:03:01.430 [600/764] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command 00:03:01.430 [601/764] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:01.430 [602/764] Linking static target drivers/librte_power_kvm_vm.a 00:03:01.430 [603/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:01.430 [604/764] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:01.430 [605/764] Compiling C object 
drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o 00:03:01.430 [606/764] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o 00:03:01.430 [607/764] Linking static target drivers/libtmp_rte_power_intel_uncore.a 00:03:01.430 [608/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:01.430 [609/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:01.687 [610/764] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.687 [611/764] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command 00:03:01.687 [612/764] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:01.687 [613/764] Linking static target drivers/librte_power_intel_uncore.a 00:03:01.687 [614/764] Linking target drivers/librte_power_kvm_vm.so.25.0 00:03:01.687 [615/764] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o 00:03:01.687 [616/764] Linking target drivers/librte_power_intel_uncore.so.25.0 00:03:01.687 [617/764] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:01.687 [618/764] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:01.687 [619/764] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:01.945 [620/764] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:01.945 [621/764] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:01.945 [622/764] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:01.945 [623/764] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:02.233 [624/764] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:02.233 [625/764] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:02.233 [626/764] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:02.233 [627/764] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:02.233 [628/764] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:02.233 [629/764] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:02.233 [630/764] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:02.233 [631/764] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:02.233 [632/764] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.233 [633/764] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:02.490 [634/764] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.490 [635/764] Linking static target drivers/librte_net_i40e.a 00:03:02.490 [636/764] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:02.490 [637/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:02.490 [638/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:02.490 [639/764] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:02.748 [640/764] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:02.748 [641/764] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.006 [642/764] Linking target drivers/librte_net_i40e.so.25.0 00:03:03.006 [643/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:03.006 [644/764] Compiling C object 
app/dpdk-pdump.p/pdump_main.c.o 00:03:03.006 [645/764] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:03.006 [646/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:03.264 [647/764] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:03.264 [648/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:03.264 [649/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:03.523 [650/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:03.523 [651/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:03.780 [652/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:03.780 [653/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:03.780 [654/764] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:03.780 [655/764] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.780 [656/764] Linking static target lib/librte_vhost.a 00:03:03.780 [657/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:04.038 [658/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:04.038 [659/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:04.038 [660/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:04.038 [661/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:04.039 [662/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:04.296 [663/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:04.296 [664/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:04.296 [665/764] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:04.296 [666/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:04.296 [667/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:04.296 [668/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:04.553 [669/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:04.553 [670/764] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.553 [671/764] Linking target lib/librte_vhost.so.25.0 00:03:04.810 [672/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:04.810 [673/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:04.810 [674/764] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:04.810 [675/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:05.375 [676/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:05.375 [677/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:05.375 [678/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:05.375 [679/764] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:05.674 [680/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:05.674 [681/764] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:05.674 [682/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:05.674 [683/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:05.674 [684/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:05.674 [685/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:05.674 [686/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:05.674 [687/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:05.932 [688/764] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:05.932 [689/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:05.932 [690/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:05.932 [691/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:06.189 [692/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:06.189 [693/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:06.189 [694/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:06.455 [695/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:06.455 [696/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:06.455 [697/764] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:06.455 [698/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:06.455 [699/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:06.455 [700/764] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:06.455 [701/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:06.455 [702/764] Linking static target lib/librte_pipeline.a 00:03:06.719 [703/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:06.719 [704/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:06.719 [705/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:06.719 [706/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:06.719 [707/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:06.976 [708/764] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:06.976 [709/764] Linking target app/dpdk-dumpcap 00:03:06.976 [710/764] Linking target app/dpdk-pdump 00:03:06.976 [711/764] Linking target app/dpdk-proc-info 00:03:06.976 [712/764] Linking target app/dpdk-graph 00:03:06.976 [713/764] Linking target app/dpdk-test-acl 00:03:07.234 [714/764] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:07.234 [715/764] Linking target app/dpdk-test-cmdline 00:03:07.234 [716/764] Linking target app/dpdk-test-compress-perf 00:03:07.234 [717/764] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:07.492 [718/764] Linking target app/dpdk-test-dma-perf 00:03:07.492 [719/764] Linking target app/dpdk-test-crypto-perf 00:03:07.492 [720/764] Compiling C object 
app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:07.492 [721/764] Linking target app/dpdk-test-eventdev 00:03:07.492 [722/764] Linking target app/dpdk-test-fib 00:03:07.492 [723/764] Linking target app/dpdk-test-flow-perf 00:03:07.492 [724/764] Linking target app/dpdk-test-gpudev 00:03:07.492 [725/764] Linking target app/dpdk-test-pipeline 00:03:07.750 [726/764] Linking target app/dpdk-test-mldev 00:03:07.750 [727/764] Linking target app/dpdk-test-bbdev 00:03:07.750 [728/764] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:07.750 [729/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:08.010 [730/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:08.010 [731/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:08.272 [732/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:08.272 [733/764] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:08.272 [734/764] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o 00:03:08.531 [735/764] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:08.531 [736/764] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:08.531 [737/764] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:08.531 [738/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:08.789 [739/764] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:08.789 [740/764] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.789 [741/764] Linking target lib/librte_pipeline.so.25.0 00:03:08.789 [742/764] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:08.789 [743/764] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:09.048 [744/764] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:09.048 [745/764] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:09.048 [746/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:09.048 [747/764] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:09.307 [748/764] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:09.307 [749/764] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:09.565 [750/764] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:09.565 [751/764] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:09.566 [752/764] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:09.566 [753/764] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:09.825 [754/764] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:09.825 [755/764] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:09.825 [756/764] Linking target app/dpdk-test-sad 00:03:09.825 [757/764] Linking target app/dpdk-test-regex 00:03:09.825 [758/764] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:09.825 [759/764] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:10.084 [760/764] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:10.342 [761/764] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:10.342 [762/764] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:10.342 [763/764] Linking target app/dpdk-test-security-perf 00:03:10.600 [764/764] Linking target 
app/dpdk-testpmd 00:03:10.600 00:48:33 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:10.600 00:48:33 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:10.600 00:48:33 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:10.600 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:10.861 [0/1] Installing files. 00:03:10.861 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:10.861 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:10.861 Installing 
/home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:10.861 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:10.862 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 
00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:10.863 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.863 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:10.864 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:10.864 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.125 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 
00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.126 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.126 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.126 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:11.126 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_cmdline.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.126 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing 
lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.127 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing 
lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.388 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.388 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.388 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.388 Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.388 Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.388 Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.388 Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.388 Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.389 Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.389 Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.389 Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.389 Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:11.389 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-dma-perf to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 
Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.389 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.390 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.391 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 
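The rte_*.h headers staged into /home/vagrant/spdk_repo/dpdk/build/include above, together with the libdpdk.pc files installed a few records further down, are what a DPDK consumer compiles against. A minimal smoke test of that staging, as a sketch: only the directories are taken from this log, the source and output paths are hypothetical, and no such check is run by this pipeline. Running the resulting binary would additionally need LD_LIBRARY_PATH pointed at build/lib.

# Hypothetical smoke test against the staged DPDK; not part of this job.
export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
cat > /tmp/eal_hello.c <<'EOF'
#include <stdio.h>
#include <rte_eal.h>    /* staged into build/include above */
#include <rte_lcore.h>

int main(int argc, char **argv)
{
        if (rte_eal_init(argc, argv) < 0)       /* bring up the EAL */
                return 1;
        printf("EAL up, main lcore %u\n", rte_lcore_id());
        rte_eal_cleanup();
        return 0;
}
EOF
gcc /tmp/eal_hello.c -o /tmp/eal_hello $(pkg-config --cflags --libs libdpdk)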
00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:11.392 Installing 
/home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.392 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:11.392 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:11.392 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:11.392 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:11.392 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:11.392 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:11.392 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:11.392 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:11.392 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:11.392 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:11.392 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:11.392 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:11.392 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:11.392 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:11.392 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:11.392 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:11.392 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:11.392 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:11.392 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:11.392 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:11.392 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:11.392 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:11.392 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:11.392 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:11.392 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:11.392 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:11.392 Installing symlink pointing to librte_pci.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:11.392 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:11.392 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:11.392 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:11.392 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:11.392 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:11.392 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:11.392 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:11.392 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:11.392 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:11.392 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:11.392 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:11.392 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:11.392 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:11.392 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:11.392 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:11.392 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:11.392 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:11.392 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:11.392 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:11.392 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:11.392 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:11.392 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:11.392 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:11.392 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:11.392 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:11.392 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:11.392 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:11.392 Installing symlink pointing to librte_efd.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:11.392 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:11.392 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:11.392 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:11.392 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:11.392 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:11.392 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:11.392 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:11.392 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:11.392 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:11.392 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:11.392 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:11.392 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:11.392 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:11.392 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:11.392 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:11.392 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:11.392 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:11.392 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:11.392 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:11.392 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:11.393 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:11.393 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:11.393 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:11.393 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:11.393 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:11.393 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:11.393 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:11.393 Installing symlink pointing to librte_regexdev.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:11.393 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:11.393 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:11.393 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:11.393 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:11.393 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:11.393 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:11.393 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:11.393 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:11.393 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:11.393 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:11.393 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:11.393 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:11.393 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:11.393 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:11.393 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:11.393 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:11.393 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:11.393 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:11.393 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:11.393 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:11.393 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:11.393 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:11.393 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:11.393 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:11.393 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:11.393 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:11.393 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:11.393 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:11.393 Installing 
symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:11.393 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:11.393 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:11.393 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:11.393 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:11.393 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:11.393 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:11.393 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:11.393 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:11.393 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:11.393 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:11.393 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:11.393 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:11.393 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:11.393 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:11.393 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:11.393 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:11.393 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:11.393 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:11.393 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:11.393 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:11.393 './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so' 00:03:11.393 './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25' 00:03:11.393 './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0' 00:03:11.393 './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so' 00:03:11.393 './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25' 00:03:11.393 './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0' 00:03:11.393 './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so' 00:03:11.393 './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25' 00:03:11.393 './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0' 00:03:11.393 './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so' 00:03:11.393 './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25' 00:03:11.393 './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0' 00:03:11.393 './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so' 00:03:11.393 './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25' 00:03:11.393 './librte_power_intel_uncore.so.25.0' -> 
'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0' 00:03:11.393 './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so' 00:03:11.393 './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25' 00:03:11.393 './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0' 00:03:11.393 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:11.393 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:11.393 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:11.393 Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25 00:03:11.393 Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:03:11.393 Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25 00:03:11.393 Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:03:11.393 Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25 00:03:11.393 Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:03:11.393 Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25 00:03:11.393 Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:03:11.393 Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25 00:03:11.393 Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:03:11.393 Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25 00:03:11.393 Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:03:11.393 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:11.393 00:48:34 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:11.393 00:48:34 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:11.393 00:03:11.394 real 0m41.512s 00:03:11.394 user 4m42.550s 00:03:11.394 sys 0m44.186s 00:03:11.394 00:48:34 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:11.394 00:48:34 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:11.394 ************************************ 00:03:11.394 END TEST build_native_dpdk 00:03:11.394 ************************************ 00:03:11.652 00:48:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:11.652 00:48:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:11.652 
00:48:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:11.652 00:48:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:11.652 00:48:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:11.652 00:48:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:11.652 00:48:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:11.652 00:48:34 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:11.652 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:11.652 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:11.652 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:11.652 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:12.219 Using 'verbs' RDMA provider 00:03:23.281 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:33.256 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:33.518 Creating mk/config.mk...done. 00:03:33.518 Creating mk/cc.flags.mk...done. 00:03:33.518 Type 'make' to build. 00:03:33.518 00:48:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:33.518 00:48:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:33.518 00:48:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:33.518 00:48:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:33.518 ************************************ 00:03:33.518 START TEST make 00:03:33.518 ************************************ 00:03:33.518 00:48:56 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:33.777 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:33.777 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:33.777 meson setup builddir \ 00:03:33.777 -Dwith-libaio=enabled \ 00:03:33.777 -Dwith-liburing=enabled \ 00:03:33.777 -Dwith-libvfn=disabled \ 00:03:33.777 -Dwith-spdk=disabled \ 00:03:33.777 -Dexamples=false \ 00:03:33.777 -Dtests=false \ 00:03:33.777 -Dtools=false && \ 00:03:33.777 meson compile -C builddir && \ 00:03:33.777 cd -) 00:03:33.777 make[1]: Nothing to be done for 'all'. 
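The -Dwith-* switches in the meson setup invocation above are xnvme feature options rather than plain booleans, which is why the configuration output that follows reports probes as skipped (for example libvfn) instead of failing on them. A sketch of how to re-inspect the effective values after setup, using only paths shown in this log (not a step this job performs):

# Hypothetical: list the effective xnvme options after 'meson setup'.
cd /home/vagrant/spdk_repo/spdk/xnvme
meson configure builddir | grep -E 'with-|examples|tests|tools'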
00:03:35.683 The Meson build system 00:03:35.683 Version: 1.5.0 00:03:35.683 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:35.683 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:35.683 Build type: native build 00:03:35.683 Project name: xnvme 00:03:35.683 Project version: 0.7.5 00:03:35.683 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:35.683 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:35.683 Host machine cpu family: x86_64 00:03:35.683 Host machine cpu: x86_64 00:03:35.683 Message: host_machine.system: linux 00:03:35.683 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:35.683 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:35.683 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:35.683 Run-time dependency threads found: YES 00:03:35.683 Has header "setupapi.h" : NO 00:03:35.683 Has header "linux/blkzoned.h" : YES 00:03:35.683 Has header "linux/blkzoned.h" : YES (cached) 00:03:35.683 Has header "libaio.h" : YES 00:03:35.683 Library aio found: YES 00:03:35.683 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:35.683 Run-time dependency liburing found: YES 2.2 00:03:35.683 Dependency libvfn skipped: feature with-libvfn disabled 00:03:35.683 Found CMake: /usr/bin/cmake (3.27.7) 00:03:35.683 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:35.683 Subproject spdk : skipped: feature with-spdk disabled 00:03:35.683 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.683 Run-time dependency appleframeworks found: NO (tried framework) 00:03:35.683 Library rt found: YES 00:03:35.683 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:35.683 Configuring xnvme_config.h using configuration 00:03:35.683 Configuring xnvme.spec using configuration 00:03:35.683 Run-time dependency bash-completion found: YES 2.11 00:03:35.683 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:35.683 Program cp found: YES (/usr/bin/cp) 00:03:35.683 Build targets in project: 3 00:03:35.683 00:03:35.683 xnvme 0.7.5 00:03:35.683 00:03:35.684 Subprojects 00:03:35.684 spdk : NO Feature 'with-spdk' disabled 00:03:35.684 00:03:35.684 User defined options 00:03:35.684 examples : false 00:03:35.684 tests : false 00:03:35.684 tools : false 00:03:35.684 with-libaio : enabled 00:03:35.684 with-liburing: enabled 00:03:35.684 with-libvfn : disabled 00:03:35.684 with-spdk : disabled 00:03:35.684 00:03:35.684 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:36.253 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:36.253 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:36.253 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:36.253 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:36.253 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:36.253 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:36.253 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:36.253 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:36.253 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:36.254 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:36.254 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 
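Each "Has header" / "Library ... found" / "Run-time dependency" record above is a compiler or pkg-config probe, and the results are what "Configuring xnvme_config.h using configuration" bakes into the build. When a probe result looks wrong (say, 'Run-time dependency libisal found: NO (tried pkgconfig and cmake)'), meson keeps the full reasoning in its own log; a hedged way to dig in, assuming only the builddir named above:

# Hypothetical debugging aids; not executed by this pipeline.
cd /home/vagrant/spdk_repo/spdk/xnvme
meson introspect builddir --dependencies              # machine-readable probe results (liburing 2.2, aio, rt, ...)
grep -n 'libisal' builddir/meson-logs/meson-log.txt   # why the probe answered NO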
00:03:36.254 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:36.254 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:36.254 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:36.254 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:36.254 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:36.254 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:36.254 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:36.254 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:36.254 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:36.254 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:36.254 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:36.254 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:36.254 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:36.525 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:36.525 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:36.525 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:36.525 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:36.525 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:36.525 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:36.525 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:36.525 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:36.525 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:36.525 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:36.525 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:36.525 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:36.525 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:36.525 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:36.525 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:36.525 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:36.525 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:36.525 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:36.525 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:36.525 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:36.525 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:36.525 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:36.525 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:36.525 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:36.526 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:36.526 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:36.526 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 
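Note that the FreeBSD, macOS, Windows, ramdisk, SPDK and vfio backend objects above are all compiled in this Linux build: xnvme compiles every backend (unsupported ones as stubs) into one library and selects among them at runtime. Once ninja links lib/libxnvme.a below, that can be sanity-checked with nm; the xnvme_be_* symbol names here are an assumption inferred from the object names above, not verified against the sources.

# Hypothetical: list backend entry symbols in the static library linked below.
nm -g --defined-only /home/vagrant/spdk_repo/spdk/xnvme/builddir/lib/libxnvme.a | grep ' xnvme_be_' | sort | head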
00:03:36.526 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:36.526 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:36.526 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:36.526 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:36.526 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:36.526 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:36.526 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:36.526 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:36.526 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:36.526 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:36.785 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:36.785 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:36.785 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:36.785 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:36.785 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:36.785 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:36.785 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:36.785 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:36.785 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:36.785 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:36.785 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:36.785 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:36.785 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:37.354 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:37.354 [75/76] Linking static target lib/libxnvme.a 00:03:37.354 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:37.354 INFO: autodetecting backend as ninja 00:03:37.354 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:37.354 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:16.081 CC lib/ut/ut.o 00:04:16.081 CC lib/log/log.o 00:04:16.081 CC lib/log/log_deprecated.o 00:04:16.081 CC lib/log/log_flags.o 00:04:16.081 CC lib/ut_mock/mock.o 00:04:16.081 LIB libspdk_ut_mock.a 00:04:16.081 LIB libspdk_ut.a 00:04:16.081 SO libspdk_ut_mock.so.6.0 00:04:16.081 LIB libspdk_log.a 00:04:16.081 SO libspdk_ut.so.2.0 00:04:16.081 SO libspdk_log.so.7.1 00:04:16.081 SYMLINK libspdk_ut_mock.so 00:04:16.081 SYMLINK libspdk_ut.so 00:04:16.081 SYMLINK libspdk_log.so 00:04:16.081 CC lib/ioat/ioat.o 00:04:16.081 CC lib/util/base64.o 00:04:16.081 CC lib/util/bit_array.o 00:04:16.081 CC lib/util/crc16.o 00:04:16.081 CC lib/util/crc32c.o 00:04:16.081 CC lib/util/cpuset.o 00:04:16.081 CC lib/util/crc32.o 00:04:16.081 CC lib/dma/dma.o 00:04:16.081 CXX lib/trace_parser/trace.o 00:04:16.081 CC lib/vfio_user/host/vfio_user_pci.o 00:04:16.081 CC lib/util/crc32_ieee.o 00:04:16.081 CC lib/util/crc64.o 00:04:16.081 CC lib/util/dif.o 00:04:16.081 CC lib/util/fd.o 00:04:16.081 LIB libspdk_dma.a 00:04:16.081 CC lib/vfio_user/host/vfio_user.o 00:04:16.081 LIB libspdk_ioat.a 00:04:16.081 SO libspdk_dma.so.5.0 00:04:16.081 CC lib/util/fd_group.o 00:04:16.081 SO libspdk_ioat.so.7.0 00:04:16.081 CC lib/util/file.o 00:04:16.081 SYMLINK libspdk_dma.so 00:04:16.081 CC 
lib/util/hexlify.o 00:04:16.081 CC lib/util/iov.o 00:04:16.081 SYMLINK libspdk_ioat.so 00:04:16.081 CC lib/util/math.o 00:04:16.081 CC lib/util/net.o 00:04:16.081 CC lib/util/pipe.o 00:04:16.081 LIB libspdk_vfio_user.a 00:04:16.081 CC lib/util/strerror_tls.o 00:04:16.081 CC lib/util/string.o 00:04:16.081 CC lib/util/uuid.o 00:04:16.081 SO libspdk_vfio_user.so.5.0 00:04:16.081 CC lib/util/xor.o 00:04:16.081 CC lib/util/zipf.o 00:04:16.081 SYMLINK libspdk_vfio_user.so 00:04:16.081 CC lib/util/md5.o 00:04:16.081 LIB libspdk_util.a 00:04:16.081 SO libspdk_util.so.10.1 00:04:16.081 LIB libspdk_trace_parser.a 00:04:16.081 SYMLINK libspdk_util.so 00:04:16.081 SO libspdk_trace_parser.so.6.0 00:04:16.081 SYMLINK libspdk_trace_parser.so 00:04:16.081 CC lib/json/json_parse.o 00:04:16.081 CC lib/json/json_util.o 00:04:16.081 CC lib/json/json_write.o 00:04:16.081 CC lib/env_dpdk/env.o 00:04:16.081 CC lib/env_dpdk/pci.o 00:04:16.081 CC lib/env_dpdk/memory.o 00:04:16.081 CC lib/idxd/idxd.o 00:04:16.081 CC lib/rdma_utils/rdma_utils.o 00:04:16.081 CC lib/conf/conf.o 00:04:16.081 CC lib/vmd/vmd.o 00:04:16.081 LIB libspdk_conf.a 00:04:16.081 SO libspdk_conf.so.6.0 00:04:16.081 CC lib/vmd/led.o 00:04:16.081 SYMLINK libspdk_conf.so 00:04:16.081 CC lib/env_dpdk/init.o 00:04:16.081 CC lib/env_dpdk/threads.o 00:04:16.081 LIB libspdk_rdma_utils.a 00:04:16.081 LIB libspdk_json.a 00:04:16.081 SO libspdk_rdma_utils.so.1.0 00:04:16.081 CC lib/idxd/idxd_user.o 00:04:16.081 SO libspdk_json.so.6.0 00:04:16.081 SYMLINK libspdk_rdma_utils.so 00:04:16.081 CC lib/idxd/idxd_kernel.o 00:04:16.081 SYMLINK libspdk_json.so 00:04:16.081 CC lib/env_dpdk/pci_ioat.o 00:04:16.081 CC lib/env_dpdk/pci_virtio.o 00:04:16.081 CC lib/env_dpdk/pci_vmd.o 00:04:16.081 CC lib/env_dpdk/pci_idxd.o 00:04:16.081 CC lib/env_dpdk/pci_event.o 00:04:16.081 CC lib/rdma_provider/common.o 00:04:16.081 CC lib/env_dpdk/sigbus_handler.o 00:04:16.081 LIB libspdk_vmd.a 00:04:16.081 CC lib/env_dpdk/pci_dpdk.o 00:04:16.081 SO libspdk_vmd.so.6.0 00:04:16.081 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:16.081 SYMLINK libspdk_vmd.so 00:04:16.081 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:16.081 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:16.081 LIB libspdk_idxd.a 00:04:16.081 SO libspdk_idxd.so.12.1 00:04:16.081 CC lib/jsonrpc/jsonrpc_server.o 00:04:16.081 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:16.081 CC lib/jsonrpc/jsonrpc_client.o 00:04:16.081 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:16.081 SYMLINK libspdk_idxd.so 00:04:16.081 LIB libspdk_rdma_provider.a 00:04:16.081 SO libspdk_rdma_provider.so.7.0 00:04:16.081 SYMLINK libspdk_rdma_provider.so 00:04:16.081 LIB libspdk_jsonrpc.a 00:04:16.081 SO libspdk_jsonrpc.so.6.0 00:04:16.081 SYMLINK libspdk_jsonrpc.so 00:04:16.081 CC lib/rpc/rpc.o 00:04:16.081 LIB libspdk_env_dpdk.a 00:04:16.081 SO libspdk_env_dpdk.so.15.1 00:04:16.081 LIB libspdk_rpc.a 00:04:16.081 SO libspdk_rpc.so.6.0 00:04:16.081 SYMLINK libspdk_rpc.so 00:04:16.081 SYMLINK libspdk_env_dpdk.so 00:04:16.081 CC lib/keyring/keyring.o 00:04:16.081 CC lib/notify/notify_rpc.o 00:04:16.081 CC lib/notify/notify.o 00:04:16.081 CC lib/trace/trace.o 00:04:16.081 CC lib/keyring/keyring_rpc.o 00:04:16.081 CC lib/trace/trace_flags.o 00:04:16.081 CC lib/trace/trace_rpc.o 00:04:16.081 LIB libspdk_notify.a 00:04:16.081 SO libspdk_notify.so.6.0 00:04:16.081 LIB libspdk_keyring.a 00:04:16.081 SYMLINK libspdk_notify.so 00:04:16.081 SO libspdk_keyring.so.2.0 00:04:16.081 LIB libspdk_trace.a 00:04:16.081 SO libspdk_trace.so.11.0 00:04:16.081 SYMLINK libspdk_keyring.so 
00:04:16.081 SYMLINK libspdk_trace.so 00:04:16.081 CC lib/sock/sock.o 00:04:16.081 CC lib/sock/sock_rpc.o 00:04:16.081 CC lib/thread/thread.o 00:04:16.081 CC lib/thread/iobuf.o 00:04:16.081 LIB libspdk_sock.a 00:04:16.082 SO libspdk_sock.so.10.0 00:04:16.082 SYMLINK libspdk_sock.so 00:04:16.082 CC lib/nvme/nvme_ns_cmd.o 00:04:16.082 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:16.082 CC lib/nvme/nvme_fabric.o 00:04:16.082 CC lib/nvme/nvme_pcie_common.o 00:04:16.082 CC lib/nvme/nvme_ctrlr.o 00:04:16.082 CC lib/nvme/nvme_qpair.o 00:04:16.082 CC lib/nvme/nvme.o 00:04:16.082 CC lib/nvme/nvme_pcie.o 00:04:16.082 CC lib/nvme/nvme_ns.o 00:04:16.082 CC lib/nvme/nvme_quirks.o 00:04:16.082 CC lib/nvme/nvme_transport.o 00:04:16.082 CC lib/nvme/nvme_discovery.o 00:04:16.082 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:16.082 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:16.082 LIB libspdk_thread.a 00:04:16.082 SO libspdk_thread.so.11.0 00:04:16.082 CC lib/nvme/nvme_tcp.o 00:04:16.082 SYMLINK libspdk_thread.so 00:04:16.082 CC lib/nvme/nvme_opal.o 00:04:16.082 CC lib/nvme/nvme_io_msg.o 00:04:16.082 CC lib/nvme/nvme_poll_group.o 00:04:16.082 CC lib/accel/accel.o 00:04:16.082 CC lib/nvme/nvme_zns.o 00:04:16.082 CC lib/nvme/nvme_stubs.o 00:04:16.340 CC lib/nvme/nvme_auth.o 00:04:16.340 CC lib/nvme/nvme_cuse.o 00:04:16.340 CC lib/nvme/nvme_rdma.o 00:04:16.597 CC lib/blob/blobstore.o 00:04:16.597 CC lib/blob/request.o 00:04:16.597 CC lib/blob/zeroes.o 00:04:16.597 CC lib/init/json_config.o 00:04:16.855 CC lib/init/subsystem.o 00:04:16.855 CC lib/accel/accel_rpc.o 00:04:16.855 CC lib/virtio/virtio.o 00:04:16.855 CC lib/init/subsystem_rpc.o 00:04:17.113 CC lib/fsdev/fsdev.o 00:04:17.113 CC lib/init/rpc.o 00:04:17.113 CC lib/virtio/virtio_vhost_user.o 00:04:17.113 CC lib/accel/accel_sw.o 00:04:17.113 CC lib/virtio/virtio_vfio_user.o 00:04:17.113 CC lib/virtio/virtio_pci.o 00:04:17.113 LIB libspdk_init.a 00:04:17.371 CC lib/blob/blob_bs_dev.o 00:04:17.371 SO libspdk_init.so.6.0 00:04:17.371 SYMLINK libspdk_init.so 00:04:17.371 CC lib/fsdev/fsdev_io.o 00:04:17.371 CC lib/fsdev/fsdev_rpc.o 00:04:17.371 LIB libspdk_accel.a 00:04:17.371 SO libspdk_accel.so.16.0 00:04:17.629 LIB libspdk_virtio.a 00:04:17.629 CC lib/event/app.o 00:04:17.629 CC lib/event/reactor.o 00:04:17.629 CC lib/event/log_rpc.o 00:04:17.629 CC lib/event/app_rpc.o 00:04:17.629 SO libspdk_virtio.so.7.0 00:04:17.629 SYMLINK libspdk_accel.so 00:04:17.629 SYMLINK libspdk_virtio.so 00:04:17.629 CC lib/event/scheduler_static.o 00:04:17.629 CC lib/bdev/bdev.o 00:04:17.629 CC lib/bdev/bdev_rpc.o 00:04:17.629 CC lib/bdev/bdev_zone.o 00:04:17.629 LIB libspdk_fsdev.a 00:04:17.629 SO libspdk_fsdev.so.2.0 00:04:17.629 CC lib/bdev/part.o 00:04:17.629 CC lib/bdev/scsi_nvme.o 00:04:17.887 LIB libspdk_nvme.a 00:04:17.887 SYMLINK libspdk_fsdev.so 00:04:17.887 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:17.887 SO libspdk_nvme.so.15.0 00:04:17.887 LIB libspdk_event.a 00:04:17.887 SO libspdk_event.so.14.0 00:04:18.144 SYMLINK libspdk_event.so 00:04:18.144 SYMLINK libspdk_nvme.so 00:04:18.402 LIB libspdk_fuse_dispatcher.a 00:04:18.402 SO libspdk_fuse_dispatcher.so.1.0 00:04:18.402 SYMLINK libspdk_fuse_dispatcher.so 00:04:19.778 LIB libspdk_blob.a 00:04:20.036 SO libspdk_blob.so.12.0 00:04:20.036 SYMLINK libspdk_blob.so 00:04:20.296 CC lib/blobfs/blobfs.o 00:04:20.296 CC lib/blobfs/tree.o 00:04:20.296 CC lib/lvol/lvol.o 00:04:20.296 LIB libspdk_bdev.a 00:04:20.296 SO libspdk_bdev.so.17.0 00:04:20.296 SYMLINK libspdk_bdev.so 00:04:20.558 CC lib/nbd/nbd.o 00:04:20.558 CC lib/nbd/nbd_rpc.o 
00:04:20.558 CC lib/ftl/ftl_core.o 00:04:20.558 CC lib/nvmf/ctrlr.o 00:04:20.558 CC lib/ftl/ftl_init.o 00:04:20.558 CC lib/nvmf/ctrlr_discovery.o 00:04:20.558 CC lib/scsi/dev.o 00:04:20.558 CC lib/ublk/ublk.o 00:04:20.817 CC lib/ublk/ublk_rpc.o 00:04:20.817 CC lib/ftl/ftl_layout.o 00:04:20.817 CC lib/scsi/lun.o 00:04:20.817 CC lib/ftl/ftl_debug.o 00:04:20.818 CC lib/ftl/ftl_io.o 00:04:21.076 LIB libspdk_nbd.a 00:04:21.076 CC lib/scsi/port.o 00:04:21.076 SO libspdk_nbd.so.7.0 00:04:21.076 LIB libspdk_blobfs.a 00:04:21.076 CC lib/ftl/ftl_sb.o 00:04:21.076 SYMLINK libspdk_nbd.so 00:04:21.076 CC lib/scsi/scsi.o 00:04:21.076 SO libspdk_blobfs.so.11.0 00:04:21.076 CC lib/ftl/ftl_l2p.o 00:04:21.076 CC lib/ftl/ftl_l2p_flat.o 00:04:21.076 CC lib/ftl/ftl_nv_cache.o 00:04:21.076 SYMLINK libspdk_blobfs.so 00:04:21.076 CC lib/scsi/scsi_bdev.o 00:04:21.076 CC lib/scsi/scsi_pr.o 00:04:21.076 CC lib/scsi/scsi_rpc.o 00:04:21.076 LIB libspdk_lvol.a 00:04:21.334 LIB libspdk_ublk.a 00:04:21.334 SO libspdk_lvol.so.11.0 00:04:21.334 CC lib/scsi/task.o 00:04:21.334 SO libspdk_ublk.so.3.0 00:04:21.334 CC lib/nvmf/ctrlr_bdev.o 00:04:21.334 SYMLINK libspdk_lvol.so 00:04:21.334 CC lib/ftl/ftl_band.o 00:04:21.334 CC lib/ftl/ftl_band_ops.o 00:04:21.334 CC lib/nvmf/subsystem.o 00:04:21.334 SYMLINK libspdk_ublk.so 00:04:21.334 CC lib/nvmf/nvmf.o 00:04:21.334 CC lib/ftl/ftl_writer.o 00:04:21.593 CC lib/ftl/ftl_rq.o 00:04:21.593 CC lib/ftl/ftl_reloc.o 00:04:21.593 CC lib/ftl/ftl_l2p_cache.o 00:04:21.593 LIB libspdk_scsi.a 00:04:21.593 CC lib/ftl/ftl_p2l.o 00:04:21.593 CC lib/ftl/ftl_p2l_log.o 00:04:21.593 SO libspdk_scsi.so.9.0 00:04:21.852 SYMLINK libspdk_scsi.so 00:04:21.852 CC lib/ftl/mngt/ftl_mngt.o 00:04:21.852 CC lib/iscsi/conn.o 00:04:21.852 CC lib/iscsi/init_grp.o 00:04:21.852 CC lib/iscsi/iscsi.o 00:04:22.110 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:22.110 CC lib/iscsi/param.o 00:04:22.110 CC lib/nvmf/nvmf_rpc.o 00:04:22.110 CC lib/iscsi/portal_grp.o 00:04:22.110 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:22.110 CC lib/iscsi/tgt_node.o 00:04:22.369 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:22.369 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:22.369 CC lib/vhost/vhost.o 00:04:22.369 CC lib/vhost/vhost_rpc.o 00:04:22.369 CC lib/nvmf/transport.o 00:04:22.369 CC lib/iscsi/iscsi_subsystem.o 00:04:22.627 CC lib/iscsi/iscsi_rpc.o 00:04:22.627 CC lib/iscsi/task.o 00:04:22.627 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:22.627 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:22.886 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:22.886 CC lib/nvmf/tcp.o 00:04:22.886 CC lib/nvmf/stubs.o 00:04:22.886 CC lib/nvmf/mdns_server.o 00:04:22.886 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:22.886 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:22.886 CC lib/vhost/vhost_scsi.o 00:04:22.886 CC lib/vhost/vhost_blk.o 00:04:23.144 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:23.144 CC lib/nvmf/rdma.o 00:04:23.144 LIB libspdk_iscsi.a 00:04:23.144 SO libspdk_iscsi.so.8.0 00:04:23.144 CC lib/nvmf/auth.o 00:04:23.144 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:23.144 CC lib/vhost/rte_vhost_user.o 00:04:23.403 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:23.403 SYMLINK libspdk_iscsi.so 00:04:23.403 CC lib/ftl/utils/ftl_conf.o 00:04:23.403 CC lib/ftl/utils/ftl_md.o 00:04:23.403 CC lib/ftl/utils/ftl_mempool.o 00:04:23.403 CC lib/ftl/utils/ftl_bitmap.o 00:04:23.662 CC lib/ftl/utils/ftl_property.o 00:04:23.662 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:23.662 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:23.662 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:23.662 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:04:23.662 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:23.921 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:23.921 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:23.921 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:23.921 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:23.921 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:23.921 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:23.921 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:23.921 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:23.921 CC lib/ftl/base/ftl_base_dev.o 00:04:23.921 CC lib/ftl/base/ftl_base_bdev.o 00:04:24.179 LIB libspdk_vhost.a 00:04:24.179 CC lib/ftl/ftl_trace.o 00:04:24.179 SO libspdk_vhost.so.8.0 00:04:24.179 SYMLINK libspdk_vhost.so 00:04:24.179 LIB libspdk_ftl.a 00:04:24.437 SO libspdk_ftl.so.9.0 00:04:24.694 SYMLINK libspdk_ftl.so 00:04:25.260 LIB libspdk_nvmf.a 00:04:25.260 SO libspdk_nvmf.so.20.0 00:04:25.519 SYMLINK libspdk_nvmf.so 00:04:25.776 CC module/env_dpdk/env_dpdk_rpc.o 00:04:25.776 CC module/accel/dsa/accel_dsa.o 00:04:25.776 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:25.776 CC module/accel/error/accel_error.o 00:04:25.776 CC module/blob/bdev/blob_bdev.o 00:04:25.776 CC module/keyring/file/keyring.o 00:04:25.776 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:25.776 CC module/accel/ioat/accel_ioat.o 00:04:25.776 CC module/fsdev/aio/fsdev_aio.o 00:04:25.776 CC module/sock/posix/posix.o 00:04:25.776 LIB libspdk_env_dpdk_rpc.a 00:04:25.776 SO libspdk_env_dpdk_rpc.so.6.0 00:04:26.034 LIB libspdk_scheduler_dpdk_governor.a 00:04:26.034 SYMLINK libspdk_env_dpdk_rpc.so 00:04:26.034 CC module/accel/ioat/accel_ioat_rpc.o 00:04:26.034 CC module/keyring/file/keyring_rpc.o 00:04:26.034 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:26.034 CC module/accel/dsa/accel_dsa_rpc.o 00:04:26.034 LIB libspdk_scheduler_dynamic.a 00:04:26.034 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:26.034 CC module/accel/error/accel_error_rpc.o 00:04:26.034 LIB libspdk_blob_bdev.a 00:04:26.034 SO libspdk_scheduler_dynamic.so.4.0 00:04:26.034 SO libspdk_blob_bdev.so.12.0 00:04:26.034 LIB libspdk_keyring_file.a 00:04:26.034 LIB libspdk_accel_ioat.a 00:04:26.034 SO libspdk_keyring_file.so.2.0 00:04:26.034 SYMLINK libspdk_scheduler_dynamic.so 00:04:26.034 SO libspdk_accel_ioat.so.6.0 00:04:26.034 LIB libspdk_accel_dsa.a 00:04:26.034 SYMLINK libspdk_blob_bdev.so 00:04:26.034 SO libspdk_accel_dsa.so.5.0 00:04:26.034 SYMLINK libspdk_keyring_file.so 00:04:26.034 SYMLINK libspdk_accel_ioat.so 00:04:26.034 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:26.034 SYMLINK libspdk_accel_dsa.so 00:04:26.034 LIB libspdk_accel_error.a 00:04:26.034 CC module/fsdev/aio/linux_aio_mgr.o 00:04:26.034 CC module/accel/iaa/accel_iaa.o 00:04:26.292 SO libspdk_accel_error.so.2.0 00:04:26.292 CC module/scheduler/gscheduler/gscheduler.o 00:04:26.292 CC module/keyring/linux/keyring.o 00:04:26.292 SYMLINK libspdk_accel_error.so 00:04:26.293 CC module/accel/iaa/accel_iaa_rpc.o 00:04:26.293 CC module/keyring/linux/keyring_rpc.o 00:04:26.293 CC module/bdev/delay/vbdev_delay.o 00:04:26.293 LIB libspdk_accel_iaa.a 00:04:26.293 CC module/blobfs/bdev/blobfs_bdev.o 00:04:26.293 SO libspdk_accel_iaa.so.3.0 00:04:26.293 LIB libspdk_scheduler_gscheduler.a 00:04:26.293 SO libspdk_scheduler_gscheduler.so.4.0 00:04:26.550 LIB libspdk_keyring_linux.a 00:04:26.550 CC module/bdev/error/vbdev_error.o 00:04:26.550 SYMLINK libspdk_accel_iaa.so 00:04:26.550 CC module/bdev/error/vbdev_error_rpc.o 00:04:26.550 SO libspdk_keyring_linux.so.1.0 00:04:26.550 SYMLINK libspdk_scheduler_gscheduler.so 00:04:26.550 CC 
module/bdev/gpt/gpt.o 00:04:26.550 CC module/bdev/lvol/vbdev_lvol.o 00:04:26.550 LIB libspdk_fsdev_aio.a 00:04:26.550 SYMLINK libspdk_keyring_linux.so 00:04:26.550 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:26.550 CC module/bdev/gpt/vbdev_gpt.o 00:04:26.550 SO libspdk_fsdev_aio.so.1.0 00:04:26.550 LIB libspdk_sock_posix.a 00:04:26.550 SO libspdk_sock_posix.so.6.0 00:04:26.550 SYMLINK libspdk_fsdev_aio.so 00:04:26.550 CC module/bdev/malloc/bdev_malloc.o 00:04:26.550 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:26.550 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:26.550 LIB libspdk_blobfs_bdev.a 00:04:26.807 SYMLINK libspdk_sock_posix.so 00:04:26.807 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:26.807 LIB libspdk_bdev_error.a 00:04:26.807 SO libspdk_blobfs_bdev.so.6.0 00:04:26.807 SO libspdk_bdev_error.so.6.0 00:04:26.807 SYMLINK libspdk_blobfs_bdev.so 00:04:26.807 LIB libspdk_bdev_gpt.a 00:04:26.807 CC module/bdev/null/bdev_null.o 00:04:26.807 LIB libspdk_bdev_delay.a 00:04:26.807 SYMLINK libspdk_bdev_error.so 00:04:26.807 SO libspdk_bdev_gpt.so.6.0 00:04:26.807 SO libspdk_bdev_delay.so.6.0 00:04:26.807 CC module/bdev/null/bdev_null_rpc.o 00:04:26.807 SYMLINK libspdk_bdev_gpt.so 00:04:26.807 SYMLINK libspdk_bdev_delay.so 00:04:26.807 CC module/bdev/nvme/bdev_nvme.o 00:04:26.807 CC module/bdev/passthru/vbdev_passthru.o 00:04:26.807 CC module/bdev/raid/bdev_raid.o 00:04:26.807 CC module/bdev/raid/bdev_raid_rpc.o 00:04:27.065 LIB libspdk_bdev_lvol.a 00:04:27.065 CC module/bdev/split/vbdev_split.o 00:04:27.065 LIB libspdk_bdev_malloc.a 00:04:27.065 SO libspdk_bdev_lvol.so.6.0 00:04:27.065 CC module/bdev/split/vbdev_split_rpc.o 00:04:27.066 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:27.066 LIB libspdk_bdev_null.a 00:04:27.066 SO libspdk_bdev_malloc.so.6.0 00:04:27.066 SYMLINK libspdk_bdev_lvol.so 00:04:27.066 SO libspdk_bdev_null.so.6.0 00:04:27.066 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:27.066 SYMLINK libspdk_bdev_malloc.so 00:04:27.066 SYMLINK libspdk_bdev_null.so 00:04:27.066 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:27.066 CC module/bdev/raid/bdev_raid_sb.o 00:04:27.323 LIB libspdk_bdev_split.a 00:04:27.323 CC module/bdev/nvme/nvme_rpc.o 00:04:27.323 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:27.323 CC module/bdev/xnvme/bdev_xnvme.o 00:04:27.323 SO libspdk_bdev_split.so.6.0 00:04:27.323 CC module/bdev/aio/bdev_aio.o 00:04:27.323 SYMLINK libspdk_bdev_split.so 00:04:27.323 LIB libspdk_bdev_zone_block.a 00:04:27.323 SO libspdk_bdev_zone_block.so.6.0 00:04:27.323 LIB libspdk_bdev_passthru.a 00:04:27.323 SO libspdk_bdev_passthru.so.6.0 00:04:27.323 CC module/bdev/aio/bdev_aio_rpc.o 00:04:27.582 SYMLINK libspdk_bdev_zone_block.so 00:04:27.582 CC module/bdev/nvme/bdev_mdns_client.o 00:04:27.582 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:27.582 SYMLINK libspdk_bdev_passthru.so 00:04:27.582 CC module/bdev/ftl/bdev_ftl.o 00:04:27.582 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:27.582 CC module/bdev/nvme/vbdev_opal.o 00:04:27.582 LIB libspdk_bdev_xnvme.a 00:04:27.582 LIB libspdk_bdev_aio.a 00:04:27.582 CC module/bdev/iscsi/bdev_iscsi.o 00:04:27.582 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:27.582 SO libspdk_bdev_xnvme.so.3.0 00:04:27.582 SO libspdk_bdev_aio.so.6.0 00:04:27.582 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:27.916 SYMLINK libspdk_bdev_xnvme.so 00:04:27.916 SYMLINK libspdk_bdev_aio.so 00:04:27.916 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:27.916 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:27.916 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:27.916 LIB 
libspdk_bdev_ftl.a 00:04:27.916 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:27.916 SO libspdk_bdev_ftl.so.6.0 00:04:27.916 CC module/bdev/raid/raid0.o 00:04:27.916 SYMLINK libspdk_bdev_ftl.so 00:04:27.916 CC module/bdev/raid/raid1.o 00:04:27.916 CC module/bdev/raid/concat.o 00:04:28.211 LIB libspdk_bdev_iscsi.a 00:04:28.211 SO libspdk_bdev_iscsi.so.6.0 00:04:28.211 SYMLINK libspdk_bdev_iscsi.so 00:04:28.211 LIB libspdk_bdev_raid.a 00:04:28.211 SO libspdk_bdev_raid.so.6.0 00:04:28.211 LIB libspdk_bdev_virtio.a 00:04:28.211 SO libspdk_bdev_virtio.so.6.0 00:04:28.211 SYMLINK libspdk_bdev_raid.so 00:04:28.477 SYMLINK libspdk_bdev_virtio.so 00:04:29.043 LIB libspdk_bdev_nvme.a 00:04:29.300 SO libspdk_bdev_nvme.so.7.1 00:04:29.300 SYMLINK libspdk_bdev_nvme.so 00:04:29.866 CC module/event/subsystems/vmd/vmd.o 00:04:29.866 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:29.866 CC module/event/subsystems/fsdev/fsdev.o 00:04:29.866 CC module/event/subsystems/scheduler/scheduler.o 00:04:29.866 CC module/event/subsystems/sock/sock.o 00:04:29.866 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:29.866 CC module/event/subsystems/keyring/keyring.o 00:04:29.866 CC module/event/subsystems/iobuf/iobuf.o 00:04:29.866 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:29.866 LIB libspdk_event_fsdev.a 00:04:29.866 LIB libspdk_event_keyring.a 00:04:29.866 SO libspdk_event_fsdev.so.1.0 00:04:29.866 LIB libspdk_event_vhost_blk.a 00:04:29.866 LIB libspdk_event_vmd.a 00:04:29.866 LIB libspdk_event_scheduler.a 00:04:29.866 LIB libspdk_event_sock.a 00:04:29.866 SO libspdk_event_keyring.so.1.0 00:04:29.866 LIB libspdk_event_iobuf.a 00:04:29.866 SO libspdk_event_vhost_blk.so.3.0 00:04:29.866 SO libspdk_event_vmd.so.6.0 00:04:29.866 SO libspdk_event_scheduler.so.4.0 00:04:29.866 SO libspdk_event_sock.so.5.0 00:04:29.866 SYMLINK libspdk_event_fsdev.so 00:04:29.866 SO libspdk_event_iobuf.so.3.0 00:04:29.866 SYMLINK libspdk_event_keyring.so 00:04:29.866 SYMLINK libspdk_event_vhost_blk.so 00:04:29.866 SYMLINK libspdk_event_vmd.so 00:04:29.866 SYMLINK libspdk_event_scheduler.so 00:04:29.866 SYMLINK libspdk_event_sock.so 00:04:29.866 SYMLINK libspdk_event_iobuf.so 00:04:30.122 CC module/event/subsystems/accel/accel.o 00:04:30.378 LIB libspdk_event_accel.a 00:04:30.378 SO libspdk_event_accel.so.6.0 00:04:30.378 SYMLINK libspdk_event_accel.so 00:04:30.635 CC module/event/subsystems/bdev/bdev.o 00:04:30.635 LIB libspdk_event_bdev.a 00:04:30.892 SO libspdk_event_bdev.so.6.0 00:04:30.892 SYMLINK libspdk_event_bdev.so 00:04:30.892 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:30.892 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:30.892 CC module/event/subsystems/ublk/ublk.o 00:04:30.892 CC module/event/subsystems/scsi/scsi.o 00:04:31.149 CC module/event/subsystems/nbd/nbd.o 00:04:31.149 LIB libspdk_event_nbd.a 00:04:31.149 LIB libspdk_event_ublk.a 00:04:31.149 SO libspdk_event_nbd.so.6.0 00:04:31.149 LIB libspdk_event_scsi.a 00:04:31.149 SO libspdk_event_ublk.so.3.0 00:04:31.149 SO libspdk_event_scsi.so.6.0 00:04:31.149 SYMLINK libspdk_event_nbd.so 00:04:31.149 SYMLINK libspdk_event_ublk.so 00:04:31.149 LIB libspdk_event_nvmf.a 00:04:31.149 SYMLINK libspdk_event_scsi.so 00:04:31.149 SO libspdk_event_nvmf.so.6.0 00:04:31.406 SYMLINK libspdk_event_nvmf.so 00:04:31.406 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:31.406 CC module/event/subsystems/iscsi/iscsi.o 00:04:31.664 LIB libspdk_event_vhost_scsi.a 00:04:31.664 LIB libspdk_event_iscsi.a 00:04:31.664 SO libspdk_event_vhost_scsi.so.3.0 00:04:31.664 SO 
libspdk_event_iscsi.so.6.0 00:04:31.664 SYMLINK libspdk_event_vhost_scsi.so 00:04:31.664 SYMLINK libspdk_event_iscsi.so 00:04:31.664 SO libspdk.so.6.0 00:04:31.664 SYMLINK libspdk.so 00:04:31.921 TEST_HEADER include/spdk/accel.h 00:04:31.921 TEST_HEADER include/spdk/accel_module.h 00:04:31.921 CC test/rpc_client/rpc_client_test.o 00:04:31.921 TEST_HEADER include/spdk/assert.h 00:04:31.921 CXX app/trace/trace.o 00:04:31.921 TEST_HEADER include/spdk/barrier.h 00:04:31.921 TEST_HEADER include/spdk/base64.h 00:04:31.921 TEST_HEADER include/spdk/bdev.h 00:04:31.921 TEST_HEADER include/spdk/bdev_module.h 00:04:31.921 TEST_HEADER include/spdk/bdev_zone.h 00:04:31.921 TEST_HEADER include/spdk/bit_array.h 00:04:31.921 TEST_HEADER include/spdk/bit_pool.h 00:04:31.921 TEST_HEADER include/spdk/blob_bdev.h 00:04:31.921 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:31.921 TEST_HEADER include/spdk/blobfs.h 00:04:31.921 TEST_HEADER include/spdk/blob.h 00:04:31.921 TEST_HEADER include/spdk/conf.h 00:04:31.921 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:31.921 TEST_HEADER include/spdk/config.h 00:04:31.921 TEST_HEADER include/spdk/cpuset.h 00:04:31.921 TEST_HEADER include/spdk/crc16.h 00:04:31.921 TEST_HEADER include/spdk/crc32.h 00:04:31.921 TEST_HEADER include/spdk/crc64.h 00:04:31.921 TEST_HEADER include/spdk/dif.h 00:04:31.921 TEST_HEADER include/spdk/dma.h 00:04:31.921 TEST_HEADER include/spdk/endian.h 00:04:31.921 TEST_HEADER include/spdk/env_dpdk.h 00:04:31.921 TEST_HEADER include/spdk/env.h 00:04:31.921 TEST_HEADER include/spdk/event.h 00:04:31.921 TEST_HEADER include/spdk/fd_group.h 00:04:31.921 TEST_HEADER include/spdk/fd.h 00:04:31.921 TEST_HEADER include/spdk/file.h 00:04:31.921 TEST_HEADER include/spdk/fsdev.h 00:04:31.921 TEST_HEADER include/spdk/fsdev_module.h 00:04:31.921 TEST_HEADER include/spdk/ftl.h 00:04:31.921 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:31.921 CC examples/ioat/perf/perf.o 00:04:31.921 TEST_HEADER include/spdk/gpt_spec.h 00:04:31.921 CC examples/util/zipf/zipf.o 00:04:31.921 TEST_HEADER include/spdk/hexlify.h 00:04:31.921 TEST_HEADER include/spdk/histogram_data.h 00:04:31.921 TEST_HEADER include/spdk/idxd.h 00:04:31.921 TEST_HEADER include/spdk/idxd_spec.h 00:04:31.921 TEST_HEADER include/spdk/init.h 00:04:31.921 TEST_HEADER include/spdk/ioat.h 00:04:31.921 CC test/thread/poller_perf/poller_perf.o 00:04:31.921 TEST_HEADER include/spdk/ioat_spec.h 00:04:31.921 TEST_HEADER include/spdk/iscsi_spec.h 00:04:31.921 TEST_HEADER include/spdk/json.h 00:04:31.921 TEST_HEADER include/spdk/jsonrpc.h 00:04:31.921 TEST_HEADER include/spdk/keyring.h 00:04:31.921 TEST_HEADER include/spdk/keyring_module.h 00:04:31.921 TEST_HEADER include/spdk/likely.h 00:04:31.921 TEST_HEADER include/spdk/log.h 00:04:32.178 TEST_HEADER include/spdk/lvol.h 00:04:32.178 TEST_HEADER include/spdk/md5.h 00:04:32.178 TEST_HEADER include/spdk/memory.h 00:04:32.178 TEST_HEADER include/spdk/mmio.h 00:04:32.178 CC test/dma/test_dma/test_dma.o 00:04:32.178 TEST_HEADER include/spdk/nbd.h 00:04:32.178 TEST_HEADER include/spdk/net.h 00:04:32.178 TEST_HEADER include/spdk/notify.h 00:04:32.178 TEST_HEADER include/spdk/nvme.h 00:04:32.178 TEST_HEADER include/spdk/nvme_intel.h 00:04:32.178 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:32.178 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:32.178 TEST_HEADER include/spdk/nvme_spec.h 00:04:32.178 TEST_HEADER include/spdk/nvme_zns.h 00:04:32.178 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:32.178 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:32.178 TEST_HEADER 
include/spdk/nvmf.h 00:04:32.178 TEST_HEADER include/spdk/nvmf_spec.h 00:04:32.178 CC test/env/mem_callbacks/mem_callbacks.o 00:04:32.178 TEST_HEADER include/spdk/nvmf_transport.h 00:04:32.178 TEST_HEADER include/spdk/opal.h 00:04:32.178 TEST_HEADER include/spdk/opal_spec.h 00:04:32.178 TEST_HEADER include/spdk/pci_ids.h 00:04:32.178 CC test/app/bdev_svc/bdev_svc.o 00:04:32.178 TEST_HEADER include/spdk/pipe.h 00:04:32.178 TEST_HEADER include/spdk/queue.h 00:04:32.178 TEST_HEADER include/spdk/reduce.h 00:04:32.178 TEST_HEADER include/spdk/rpc.h 00:04:32.178 TEST_HEADER include/spdk/scheduler.h 00:04:32.178 TEST_HEADER include/spdk/scsi.h 00:04:32.178 TEST_HEADER include/spdk/scsi_spec.h 00:04:32.178 TEST_HEADER include/spdk/sock.h 00:04:32.178 TEST_HEADER include/spdk/stdinc.h 00:04:32.178 TEST_HEADER include/spdk/string.h 00:04:32.178 TEST_HEADER include/spdk/thread.h 00:04:32.178 TEST_HEADER include/spdk/trace.h 00:04:32.178 TEST_HEADER include/spdk/trace_parser.h 00:04:32.178 TEST_HEADER include/spdk/tree.h 00:04:32.178 TEST_HEADER include/spdk/ublk.h 00:04:32.178 TEST_HEADER include/spdk/util.h 00:04:32.178 LINK rpc_client_test 00:04:32.178 TEST_HEADER include/spdk/uuid.h 00:04:32.178 TEST_HEADER include/spdk/version.h 00:04:32.178 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:32.178 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:32.179 TEST_HEADER include/spdk/vhost.h 00:04:32.179 TEST_HEADER include/spdk/vmd.h 00:04:32.179 TEST_HEADER include/spdk/xor.h 00:04:32.179 TEST_HEADER include/spdk/zipf.h 00:04:32.179 CXX test/cpp_headers/accel.o 00:04:32.179 LINK interrupt_tgt 00:04:32.179 LINK zipf 00:04:32.179 LINK poller_perf 00:04:32.179 LINK ioat_perf 00:04:32.179 LINK bdev_svc 00:04:32.179 CXX test/cpp_headers/accel_module.o 00:04:32.179 LINK spdk_trace 00:04:32.179 CXX test/cpp_headers/assert.o 00:04:32.436 CXX test/cpp_headers/barrier.o 00:04:32.436 CC test/env/vtophys/vtophys.o 00:04:32.436 CXX test/cpp_headers/base64.o 00:04:32.436 CC examples/ioat/verify/verify.o 00:04:32.436 CXX test/cpp_headers/bdev.o 00:04:32.436 CC examples/thread/thread/thread_ex.o 00:04:32.436 CC app/trace_record/trace_record.o 00:04:32.436 LINK vtophys 00:04:32.436 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:32.436 LINK test_dma 00:04:32.693 CXX test/cpp_headers/bdev_module.o 00:04:32.693 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:32.693 LINK mem_callbacks 00:04:32.693 LINK verify 00:04:32.693 CC test/event/event_perf/event_perf.o 00:04:32.693 LINK thread 00:04:32.693 LINK spdk_trace_record 00:04:32.693 LINK env_dpdk_post_init 00:04:32.693 CC app/nvmf_tgt/nvmf_main.o 00:04:32.693 CXX test/cpp_headers/bdev_zone.o 00:04:32.693 CC test/app/histogram_perf/histogram_perf.o 00:04:32.693 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:32.950 LINK event_perf 00:04:32.950 CC test/event/reactor/reactor.o 00:04:32.950 LINK histogram_perf 00:04:32.950 LINK nvme_fuzz 00:04:32.950 CC test/env/memory/memory_ut.o 00:04:32.950 LINK nvmf_tgt 00:04:32.950 CXX test/cpp_headers/bit_array.o 00:04:32.950 LINK reactor 00:04:32.950 CC examples/sock/hello_world/hello_sock.o 00:04:32.950 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.950 CXX test/cpp_headers/bit_pool.o 00:04:33.208 CC examples/idxd/perf/perf.o 00:04:33.208 LINK lsvmd 00:04:33.208 CC test/event/reactor_perf/reactor_perf.o 00:04:33.208 CC test/app/jsoncat/jsoncat.o 00:04:33.208 CC test/event/app_repeat/app_repeat.o 00:04:33.208 CXX test/cpp_headers/blob_bdev.o 00:04:33.208 LINK hello_sock 00:04:33.208 CC app/iscsi_tgt/iscsi_tgt.o 00:04:33.208 LINK reactor_perf 
00:04:33.208 LINK jsoncat 00:04:33.466 CXX test/cpp_headers/blobfs_bdev.o 00:04:33.466 LINK app_repeat 00:04:33.466 CC examples/vmd/led/led.o 00:04:33.466 LINK idxd_perf 00:04:33.466 CC app/spdk_lspci/spdk_lspci.o 00:04:33.466 LINK iscsi_tgt 00:04:33.466 CXX test/cpp_headers/blobfs.o 00:04:33.466 CC app/spdk_tgt/spdk_tgt.o 00:04:33.466 CC app/spdk_nvme_perf/perf.o 00:04:33.466 CC test/event/scheduler/scheduler.o 00:04:33.466 LINK led 00:04:33.724 LINK spdk_lspci 00:04:33.724 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:33.724 CXX test/cpp_headers/blob.o 00:04:33.724 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:33.724 LINK spdk_tgt 00:04:33.724 CXX test/cpp_headers/conf.o 00:04:33.724 LINK scheduler 00:04:33.982 CC examples/accel/perf/accel_perf.o 00:04:33.982 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:33.982 CXX test/cpp_headers/config.o 00:04:33.982 CC examples/blob/hello_world/hello_blob.o 00:04:33.982 CXX test/cpp_headers/cpuset.o 00:04:33.982 CXX test/cpp_headers/crc16.o 00:04:33.982 LINK vhost_fuzz 00:04:33.982 LINK memory_ut 00:04:33.982 CC examples/nvme/hello_world/hello_world.o 00:04:33.982 CC examples/blob/cli/blobcli.o 00:04:34.240 LINK hello_blob 00:04:34.240 LINK hello_fsdev 00:04:34.240 CXX test/cpp_headers/crc32.o 00:04:34.240 LINK spdk_nvme_perf 00:04:34.240 LINK hello_world 00:04:34.240 CC examples/nvme/reconnect/reconnect.o 00:04:34.240 CC test/env/pci/pci_ut.o 00:04:34.240 CXX test/cpp_headers/crc64.o 00:04:34.498 LINK accel_perf 00:04:34.498 CC test/app/stub/stub.o 00:04:34.498 CC app/spdk_nvme_identify/identify.o 00:04:34.498 CXX test/cpp_headers/dif.o 00:04:34.498 CC test/accel/dif/dif.o 00:04:34.498 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:34.498 LINK reconnect 00:04:34.498 LINK blobcli 00:04:34.498 LINK stub 00:04:34.498 LINK iscsi_fuzz 00:04:34.498 CC examples/nvme/arbitration/arbitration.o 00:04:34.498 CXX test/cpp_headers/dma.o 00:04:34.757 LINK pci_ut 00:04:34.757 CXX test/cpp_headers/endian.o 00:04:34.757 CC examples/nvme/hotplug/hotplug.o 00:04:34.757 CXX test/cpp_headers/env_dpdk.o 00:04:35.017 CC examples/bdev/hello_world/hello_bdev.o 00:04:35.017 CC test/blobfs/mkfs/mkfs.o 00:04:35.017 LINK arbitration 00:04:35.017 CC test/lvol/esnap/esnap.o 00:04:35.017 CXX test/cpp_headers/env.o 00:04:35.017 CC test/nvme/aer/aer.o 00:04:35.017 LINK nvme_manage 00:04:35.017 LINK hotplug 00:04:35.017 LINK mkfs 00:04:35.017 LINK hello_bdev 00:04:35.017 CXX test/cpp_headers/event.o 00:04:35.274 CXX test/cpp_headers/fd_group.o 00:04:35.274 LINK dif 00:04:35.274 CC test/nvme/reset/reset.o 00:04:35.274 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:35.274 CC test/nvme/sgl/sgl.o 00:04:35.274 CXX test/cpp_headers/fd.o 00:04:35.274 LINK aer 00:04:35.274 LINK spdk_nvme_identify 00:04:35.274 CC examples/bdev/bdevperf/bdevperf.o 00:04:35.274 LINK cmb_copy 00:04:35.274 CXX test/cpp_headers/file.o 00:04:35.530 CC test/nvme/e2edp/nvme_dp.o 00:04:35.530 LINK reset 00:04:35.530 CC test/nvme/overhead/overhead.o 00:04:35.530 CXX test/cpp_headers/fsdev.o 00:04:35.530 CC app/spdk_nvme_discover/discovery_aer.o 00:04:35.530 LINK sgl 00:04:35.530 CC examples/nvme/abort/abort.o 00:04:35.530 CC test/bdev/bdevio/bdevio.o 00:04:35.788 CXX test/cpp_headers/fsdev_module.o 00:04:35.788 CXX test/cpp_headers/ftl.o 00:04:35.788 CC test/nvme/err_injection/err_injection.o 00:04:35.788 LINK spdk_nvme_discover 00:04:35.788 LINK nvme_dp 00:04:35.788 LINK overhead 00:04:35.788 CXX test/cpp_headers/fuse_dispatcher.o 00:04:35.788 LINK err_injection 00:04:35.788 CC test/nvme/startup/startup.o 
00:04:36.046 CC app/spdk_top/spdk_top.o 00:04:36.046 CC test/nvme/reserve/reserve.o 00:04:36.046 CXX test/cpp_headers/gpt_spec.o 00:04:36.046 LINK abort 00:04:36.046 CC test/nvme/simple_copy/simple_copy.o 00:04:36.046 LINK bdevio 00:04:36.046 LINK bdevperf 00:04:36.046 LINK startup 00:04:36.046 CXX test/cpp_headers/hexlify.o 00:04:36.046 CC app/vhost/vhost.o 00:04:36.046 LINK reserve 00:04:36.304 CXX test/cpp_headers/histogram_data.o 00:04:36.304 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:36.304 LINK simple_copy 00:04:36.304 CC test/nvme/connect_stress/connect_stress.o 00:04:36.304 CC test/nvme/boot_partition/boot_partition.o 00:04:36.304 CC test/nvme/compliance/nvme_compliance.o 00:04:36.304 LINK vhost 00:04:36.304 CC test/nvme/fused_ordering/fused_ordering.o 00:04:36.304 CXX test/cpp_headers/idxd.o 00:04:36.304 LINK pmr_persistence 00:04:36.560 LINK boot_partition 00:04:36.560 LINK connect_stress 00:04:36.560 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:36.560 CXX test/cpp_headers/idxd_spec.o 00:04:36.560 CXX test/cpp_headers/init.o 00:04:36.560 LINK fused_ordering 00:04:36.560 LINK nvme_compliance 00:04:36.560 LINK doorbell_aers 00:04:36.560 CXX test/cpp_headers/ioat.o 00:04:36.560 CC test/nvme/fdp/fdp.o 00:04:36.560 CC test/nvme/cuse/cuse.o 00:04:36.560 CXX test/cpp_headers/ioat_spec.o 00:04:36.818 CXX test/cpp_headers/iscsi_spec.o 00:04:36.818 CC examples/nvmf/nvmf/nvmf.o 00:04:36.818 CXX test/cpp_headers/json.o 00:04:36.818 CXX test/cpp_headers/jsonrpc.o 00:04:36.818 CXX test/cpp_headers/keyring.o 00:04:36.818 CXX test/cpp_headers/keyring_module.o 00:04:36.818 CXX test/cpp_headers/likely.o 00:04:36.818 LINK spdk_top 00:04:36.818 CC app/spdk_dd/spdk_dd.o 00:04:36.818 CXX test/cpp_headers/log.o 00:04:36.818 CXX test/cpp_headers/lvol.o 00:04:37.077 LINK nvmf 00:04:37.077 LINK fdp 00:04:37.077 CXX test/cpp_headers/md5.o 00:04:37.077 CXX test/cpp_headers/memory.o 00:04:37.077 CXX test/cpp_headers/mmio.o 00:04:37.077 CXX test/cpp_headers/nbd.o 00:04:37.077 CC app/fio/nvme/fio_plugin.o 00:04:37.077 CXX test/cpp_headers/net.o 00:04:37.077 CXX test/cpp_headers/notify.o 00:04:37.077 CXX test/cpp_headers/nvme.o 00:04:37.077 CC app/fio/bdev/fio_plugin.o 00:04:37.077 CXX test/cpp_headers/nvme_intel.o 00:04:37.335 CXX test/cpp_headers/nvme_ocssd.o 00:04:37.335 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:37.335 LINK spdk_dd 00:04:37.335 CXX test/cpp_headers/nvme_spec.o 00:04:37.335 CXX test/cpp_headers/nvme_zns.o 00:04:37.335 CXX test/cpp_headers/nvmf_cmd.o 00:04:37.335 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:37.335 CXX test/cpp_headers/nvmf.o 00:04:37.594 CXX test/cpp_headers/nvmf_spec.o 00:04:37.594 CXX test/cpp_headers/nvmf_transport.o 00:04:37.594 CXX test/cpp_headers/opal.o 00:04:37.594 CXX test/cpp_headers/opal_spec.o 00:04:37.594 LINK spdk_nvme 00:04:37.594 CXX test/cpp_headers/pci_ids.o 00:04:37.594 CXX test/cpp_headers/pipe.o 00:04:37.594 CXX test/cpp_headers/queue.o 00:04:37.594 CXX test/cpp_headers/reduce.o 00:04:37.594 CXX test/cpp_headers/rpc.o 00:04:37.594 CXX test/cpp_headers/scheduler.o 00:04:37.594 CXX test/cpp_headers/scsi.o 00:04:37.594 LINK spdk_bdev 00:04:37.851 CXX test/cpp_headers/scsi_spec.o 00:04:37.852 CXX test/cpp_headers/sock.o 00:04:37.852 CXX test/cpp_headers/stdinc.o 00:04:37.852 CXX test/cpp_headers/string.o 00:04:37.852 CXX test/cpp_headers/thread.o 00:04:37.852 CXX test/cpp_headers/trace.o 00:04:37.852 CXX test/cpp_headers/trace_parser.o 00:04:37.852 CXX test/cpp_headers/tree.o 00:04:37.852 CXX test/cpp_headers/ublk.o 00:04:37.852 CXX 
test/cpp_headers/util.o 00:04:37.852 CXX test/cpp_headers/uuid.o 00:04:37.852 CXX test/cpp_headers/version.o 00:04:37.852 CXX test/cpp_headers/vfio_user_pci.o 00:04:37.852 CXX test/cpp_headers/vfio_user_spec.o 00:04:37.852 LINK cuse 00:04:37.852 CXX test/cpp_headers/vhost.o 00:04:37.852 CXX test/cpp_headers/vmd.o 00:04:37.852 CXX test/cpp_headers/xor.o 00:04:38.110 CXX test/cpp_headers/zipf.o 00:04:40.639 LINK esnap 00:04:40.639 00:04:40.639 real 1m7.033s 00:04:40.639 user 5m24.004s 00:04:40.639 sys 0m56.893s 00:04:40.639 ************************************ 00:04:40.639 END TEST make 00:04:40.639 ************************************ 00:04:40.639 00:50:03 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:40.639 00:50:03 make -- common/autotest_common.sh@10 -- $ set +x 00:04:40.639 00:50:03 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:40.639 00:50:03 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:40.639 00:50:03 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:40.639 00:50:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.639 00:50:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:40.639 00:50:03 -- pm/common@44 -- $ pid=5806 00:04:40.639 00:50:03 -- pm/common@50 -- $ kill -TERM 5806 00:04:40.639 00:50:03 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.639 00:50:03 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:40.639 00:50:03 -- pm/common@44 -- $ pid=5808 00:04:40.639 00:50:03 -- pm/common@50 -- $ kill -TERM 5808 00:04:40.639 00:50:03 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:40.639 00:50:03 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:40.639 00:50:03 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.639 00:50:03 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.639 00:50:03 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.897 00:50:03 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.897 00:50:03 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.897 00:50:03 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.897 00:50:03 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.897 00:50:03 -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.897 00:50:03 -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.897 00:50:03 -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.897 00:50:03 -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.897 00:50:03 -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.897 00:50:03 -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.897 00:50:03 -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.897 00:50:03 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.897 00:50:03 -- scripts/common.sh@344 -- # case "$op" in 00:04:40.897 00:50:03 -- scripts/common.sh@345 -- # : 1 00:04:40.897 00:50:03 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.897 00:50:03 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.897 00:50:03 -- scripts/common.sh@365 -- # decimal 1 00:04:40.897 00:50:03 -- scripts/common.sh@353 -- # local d=1 00:04:40.897 00:50:03 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.897 00:50:03 -- scripts/common.sh@355 -- # echo 1 00:04:40.897 00:50:03 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.897 00:50:03 -- scripts/common.sh@366 -- # decimal 2 00:04:40.897 00:50:03 -- scripts/common.sh@353 -- # local d=2 00:04:40.897 00:50:03 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.897 00:50:03 -- scripts/common.sh@355 -- # echo 2 00:04:40.897 00:50:03 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.897 00:50:03 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.897 00:50:03 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.897 00:50:03 -- scripts/common.sh@368 -- # return 0 00:04:40.897 00:50:03 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.897 00:50:03 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.897 --rc genhtml_legend=1 00:04:40.897 --rc geninfo_all_blocks=1 00:04:40.897 --rc geninfo_unexecuted_blocks=1 00:04:40.897 00:04:40.897 ' 00:04:40.897 00:50:03 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.897 --rc genhtml_legend=1 00:04:40.897 --rc geninfo_all_blocks=1 00:04:40.897 --rc geninfo_unexecuted_blocks=1 00:04:40.897 00:04:40.897 ' 00:04:40.897 00:50:03 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.897 --rc genhtml_legend=1 00:04:40.897 --rc geninfo_all_blocks=1 00:04:40.897 --rc geninfo_unexecuted_blocks=1 00:04:40.897 00:04:40.897 ' 00:04:40.897 00:50:03 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.897 --rc genhtml_branch_coverage=1 00:04:40.897 --rc genhtml_function_coverage=1 00:04:40.898 --rc genhtml_legend=1 00:04:40.898 --rc geninfo_all_blocks=1 00:04:40.898 --rc geninfo_unexecuted_blocks=1 00:04:40.898 00:04:40.898 ' 00:04:40.898 00:50:03 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.898 00:50:03 -- nvmf/common.sh@7 -- # uname -s 00:04:40.898 00:50:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.898 00:50:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.898 00:50:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.898 00:50:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.898 00:50:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.898 00:50:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.898 00:50:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.898 00:50:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.898 00:50:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.898 00:50:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.898 00:50:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2791c0d3-aeea-4ee5-88bc-d866b798a508 00:04:40.898 
00:50:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=2791c0d3-aeea-4ee5-88bc-d866b798a508 00:04:40.898 00:50:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.898 00:50:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.898 00:50:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.898 00:50:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.898 00:50:03 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.898 00:50:03 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.898 00:50:03 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.898 00:50:03 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.898 00:50:03 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.898 00:50:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.898 00:50:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.898 00:50:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.898 00:50:03 -- paths/export.sh@5 -- # export PATH 00:04:40.898 00:50:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.898 00:50:03 -- nvmf/common.sh@51 -- # : 0 00:04:40.898 00:50:03 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.898 00:50:03 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:40.898 00:50:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.898 00:50:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.898 00:50:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.898 00:50:03 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.898 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.898 00:50:03 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.898 00:50:03 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.898 00:50:03 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.898 00:50:03 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:40.898 00:50:03 -- spdk/autotest.sh@32 -- # uname -s 00:04:40.898 00:50:03 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:40.898 00:50:03 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:40.898 00:50:03 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.898 00:50:03 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:40.898 00:50:03 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:40.898 00:50:03 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:40.898 00:50:03 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:40.898 00:50:03 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:40.898 00:50:03 -- spdk/autotest.sh@48 -- # udevadm_pid=68136 00:04:40.898 00:50:03 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:40.898 00:50:03 -- pm/common@17 -- # local monitor 00:04:40.898 00:50:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.898 00:50:03 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:40.898 00:50:03 -- pm/common@25 -- # sleep 1 00:04:40.898 00:50:03 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:40.898 00:50:03 -- pm/common@21 -- # date +%s 00:04:40.898 00:50:03 -- pm/common@21 -- # date +%s 00:04:40.898 00:50:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732582203 00:04:40.898 00:50:03 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732582203 00:04:40.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732582203_collect-cpu-load.pm.log 00:04:40.898 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732582203_collect-vmstat.pm.log 00:04:41.832 00:50:04 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.832 00:50:04 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:41.832 00:50:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.832 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:04:41.832 00:50:04 -- spdk/autotest.sh@59 -- # create_test_list 00:04:41.832 00:50:04 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:41.832 00:50:04 -- common/autotest_common.sh@10 -- # set +x 00:04:41.832 00:50:04 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:41.832 00:50:04 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:41.832 00:50:04 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:41.832 00:50:04 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:41.832 00:50:04 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:41.832 00:50:04 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:41.832 00:50:04 -- common/autotest_common.sh@1457 -- # uname 00:04:41.832 00:50:04 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:41.832 00:50:04 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:41.832 00:50:04 -- common/autotest_common.sh@1477 -- # uname 00:04:41.832 00:50:04 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:41.832 00:50:04 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:41.832 00:50:04 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:42.091 lcov: LCOV version 1.15 00:04:42.091 00:50:04 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:56.975 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:56.975 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.895 00:50:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:11.895 00:50:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.895 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:05:11.895 00:50:32 -- spdk/autotest.sh@78 -- # rm -f 00:05:11.895 00:50:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:11.895 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.895 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:11.895 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:11.895 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:11.895 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:11.895 00:50:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:11.895 00:50:33 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:11.895 00:50:33 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:11.895 00:50:33 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:11.895 00:50:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.895 00:50:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.895 00:50:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.895 00:50:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:11.895 00:50:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:11.895 00:50:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.895 00:50:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:11.895 00:50:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:11.895 00:50:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.895 00:50:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:11.895 00:50:33 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.895 00:50:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:11.895 00:50:33 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:11.895 00:50:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:11.895 00:50:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:11.895 00:50:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:11.895 00:50:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.895 00:50:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.895 00:50:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:11.895 00:50:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:11.896 00:50:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:11.896 No valid GPT data, bailing 00:05:11.896 00:50:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.896 00:50:33 -- scripts/common.sh@394 -- # pt= 00:05:11.896 00:50:33 -- scripts/common.sh@395 -- # return 1 00:05:11.896 00:50:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:11.896 1+0 records in 00:05:11.896 1+0 records out 00:05:11.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275297 s, 38.1 MB/s 00:05:11.896 00:50:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.896 00:50:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.896 00:50:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:11.896 00:50:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:11.896 00:50:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:11.896 No valid GPT data, bailing 00:05:11.896 00:50:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # pt= 00:05:11.896 00:50:34 -- scripts/common.sh@395 -- # return 1 00:05:11.896 00:50:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:11.896 1+0 records in 00:05:11.896 1+0 records out 00:05:11.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630702 s, 166 MB/s 00:05:11.896 00:50:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.896 00:50:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.896 00:50:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:11.896 00:50:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:11.896 00:50:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:11.896 No valid GPT data, bailing 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # pt= 00:05:11.896 00:50:34 -- scripts/common.sh@395 -- # return 1 00:05:11.896 00:50:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:11.896 1+0 
records in 00:05:11.896 1+0 records out 00:05:11.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00619726 s, 169 MB/s 00:05:11.896 00:50:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.896 00:50:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.896 00:50:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:11.896 00:50:34 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:11.896 00:50:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:11.896 No valid GPT data, bailing 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # pt= 00:05:11.896 00:50:34 -- scripts/common.sh@395 -- # return 1 00:05:11.896 00:50:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:11.896 1+0 records in 00:05:11.896 1+0 records out 00:05:11.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053957 s, 194 MB/s 00:05:11.896 00:50:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.896 00:50:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.896 00:50:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:11.896 00:50:34 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:11.896 00:50:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:11.896 No valid GPT data, bailing 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # pt= 00:05:11.896 00:50:34 -- scripts/common.sh@395 -- # return 1 00:05:11.896 00:50:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:11.896 1+0 records in 00:05:11.896 1+0 records out 00:05:11.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569062 s, 184 MB/s 00:05:11.896 00:50:34 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:11.896 00:50:34 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:11.896 00:50:34 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:11.896 00:50:34 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:11.896 00:50:34 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:11.896 No valid GPT data, bailing 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:11.896 00:50:34 -- scripts/common.sh@394 -- # pt= 00:05:11.896 00:50:34 -- scripts/common.sh@395 -- # return 1 00:05:11.896 00:50:34 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:11.896 1+0 records in 00:05:11.896 1+0 records out 00:05:11.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639851 s, 164 MB/s 00:05:11.896 00:50:34 -- spdk/autotest.sh@105 -- # sync 00:05:11.896 00:50:34 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.896 00:50:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.896 00:50:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:13.815 00:50:36 -- spdk/autotest.sh@111 -- # uname -s 00:05:13.815 00:50:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:13.815 00:50:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:13.815 00:50:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:14.076 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:14.648 
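The pre-cleanup loop traced above skips zoned namespaces, then blanks the first MiB of every namespace that carries no partition table. A condensed sketch of that logic, assuming bash with extglob enabled and omitting the spdk-gpt.py cross-check the real script also runs:

    shopt -s extglob
    for dev in /dev/nvme*n!(*p*); do                    # namespaces, not partitions
        name=$(basename "$dev")
        # zoned namespaces must not be blindly overwritten, so skip them
        zoned=$(cat "/sys/block/$name/queue/zoned" 2>/dev/null || echo none)
        [[ $zoned != none ]] && continue
        # no partition table reported -> safe to scrub the first MiB
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done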
Hugepages
00:05:14.648 node hugesize free / total
00:05:14.648 node0 1048576kB 0 / 0
00:05:14.648 node0 2048kB 0 / 0
00:05:14.648
00:05:14.648 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:14.648 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:14.648 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:14.909 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:05:14.909 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:05:14.909 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:14.909 00:50:37 -- spdk/autotest.sh@117 -- # uname -s
00:05:14.909 00:50:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:14.909 00:50:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:14.909 00:50:37 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:15.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:16.054 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:16.054 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:16.054 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:16.054 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:05:16.316 00:50:39 -- common/autotest_common.sh@1517 -- # sleep 1
00:05:17.272 00:50:40 -- common/autotest_common.sh@1518 -- # bdfs=()
00:05:17.272 00:50:40 -- common/autotest_common.sh@1518 -- # local bdfs
00:05:17.272 00:50:40 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:05:17.272 00:50:40 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:05:17.272 00:50:40 -- common/autotest_common.sh@1498 -- # bdfs=()
00:05:17.272 00:50:40 -- common/autotest_common.sh@1498 -- # local bdfs
00:05:17.272 00:50:40 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:17.272 00:50:40 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:05:17.272 00:50:40 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:05:17.272 00:50:40 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:05:17.272 00:50:40 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:05:17.272 00:50:40 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:17.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:17.793 Waiting for block devices as requested
00:05:17.793 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:05:18.054 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:05:18.054 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:05:18.054 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:05:23.348 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:05:23.348 00:50:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:05:23.348 00:50:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:05:23.348 00:50:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:05:23.348 00:50:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:05:23.348 00:50:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:05:23.348 00:50:45 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:23.348 00:50:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:23.348 00:50:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:23.348 00:50:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:23.348 00:50:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:23.348 00:50:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:23.348 00:50:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.348 00:50:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.348 00:50:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.348 00:50:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.348 00:50:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.348 00:50:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:23.348 00:50:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.348 00:50:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.349 00:50:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1543 -- # continue 00:05:23.349 00:50:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.349 00:50:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.349 00:50:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.349 00:50:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.349 00:50:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1543 -- # continue 00:05:23.349 00:50:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.349 00:50:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.349 00:50:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.349 00:50:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.349 00:50:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1543 -- # continue 00:05:23.349 00:50:46 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:23.349 00:50:46 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:23.349 00:50:46 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:23.349 00:50:46 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:23.349 00:50:46 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:23.349 00:50:46 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:23.349 00:50:46 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:23.349 00:50:46 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
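Each pass of the loop above (the last iteration finishes just below) resolves a PCI address to its /dev/nvmeX node and inspects the controller. A sketch of the same two steps, assuming the nvme-cli tool used in the trace; note the sysfs enumeration order is why 0000:00:10.0 maps to nvme1 in this run, not nvme0:

    # resolve which controller device sits behind a PCI address
    bdf_to_ctrlr() {
        local path
        path=$(readlink -f /sys/class/nvme/nvme* | grep "$1/nvme/nvme") || return 1
        echo "/dev/$(basename "$path")"
    }

    ctrlr=$(bdf_to_ctrlr 0000:00:10.0)                 # -> /dev/nvme1 in this run
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    # oacs is 0x12a above, so bit 3 (0x8, namespace management) is set
    (( oacs & 0x8 )) && echo "$ctrlr supports namespace management"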
00:05:23.349 00:50:46 -- common/autotest_common.sh@1543 -- # continue 00:05:23.349 00:50:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:23.349 00:50:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.349 00:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:23.349 00:50:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:23.349 00:50:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.349 00:50:46 -- common/autotest_common.sh@10 -- # set +x 00:05:23.349 00:50:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:23.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.496 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.496 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.496 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.496 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:24.496 00:50:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:24.496 00:50:47 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.496 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:05:24.496 00:50:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:24.496 00:50:47 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:24.496 00:50:47 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:24.496 00:50:47 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:24.496 00:50:47 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:24.496 00:50:47 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:24.496 00:50:47 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:24.496 00:50:47 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:24.496 00:50:47 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:24.496 00:50:47 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:24.496 00:50:47 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.496 00:50:47 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.496 00:50:47 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:24.757 00:50:47 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:24.757 00:50:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:24.757 00:50:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:24.757 00:50:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.757 00:50:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:24.757 00:50:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.757 00:50:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:24.757 00:50:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
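opal_revert_cleanup, traced above, first enumerates NVMe PCI addresses from gen_nvme.sh and then keeps only controllers whose PCI device ID is 0x0a54; the emulated drives here all report 0x0010, so the list stays empty and the revert is skipped. Restated as a sketch (paths relative to the SPDK repo root):

    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # keep only controllers matching the targeted PCI device ID
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done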
00:05:24.757 00:50:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:24.757 00:50:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:24.757 00:50:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:24.758 00:50:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:24.758 00:50:47 -- common/autotest_common.sh@1572 -- # return 0 00:05:24.758 00:50:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:24.758 00:50:47 -- common/autotest_common.sh@1580 -- # return 0 00:05:24.758 00:50:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:24.758 00:50:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:24.758 00:50:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:24.758 00:50:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:24.758 00:50:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:24.758 00:50:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.758 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:05:24.758 00:50:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:24.758 00:50:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:24.758 00:50:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.758 00:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.758 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:05:24.758 ************************************ 00:05:24.758 START TEST env 00:05:24.758 ************************************ 00:05:24.758 00:50:47 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:24.758 * Looking for test storage... 00:05:24.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:24.758 00:50:47 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.758 00:50:47 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.758 00:50:47 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.758 00:50:47 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.758 00:50:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.758 00:50:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.758 00:50:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.758 00:50:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.758 00:50:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.758 00:50:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.758 00:50:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.758 00:50:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.758 00:50:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.758 00:50:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.758 00:50:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.758 00:50:47 env -- scripts/common.sh@344 -- # case "$op" in 00:05:24.758 00:50:47 env -- scripts/common.sh@345 -- # : 1 00:05:24.758 00:50:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.758 00:50:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.758 00:50:47 env -- scripts/common.sh@365 -- # decimal 1 00:05:24.758 00:50:47 env -- scripts/common.sh@353 -- # local d=1 00:05:24.758 00:50:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.758 00:50:47 env -- scripts/common.sh@355 -- # echo 1 00:05:24.758 00:50:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.758 00:50:47 env -- scripts/common.sh@366 -- # decimal 2 00:05:25.019 00:50:47 env -- scripts/common.sh@353 -- # local d=2 00:05:25.019 00:50:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.019 00:50:47 env -- scripts/common.sh@355 -- # echo 2 00:05:25.019 00:50:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.019 00:50:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.019 00:50:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.019 00:50:47 env -- scripts/common.sh@368 -- # return 0 00:05:25.019 00:50:47 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.019 00:50:47 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.019 --rc genhtml_branch_coverage=1 00:05:25.019 --rc genhtml_function_coverage=1 00:05:25.019 --rc genhtml_legend=1 00:05:25.019 --rc geninfo_all_blocks=1 00:05:25.019 --rc geninfo_unexecuted_blocks=1 00:05:25.019 00:05:25.019 ' 00:05:25.019 00:50:47 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.019 --rc genhtml_branch_coverage=1 00:05:25.019 --rc genhtml_function_coverage=1 00:05:25.019 --rc genhtml_legend=1 00:05:25.019 --rc geninfo_all_blocks=1 00:05:25.019 --rc geninfo_unexecuted_blocks=1 00:05:25.019 00:05:25.019 ' 00:05:25.019 00:50:47 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.019 --rc genhtml_branch_coverage=1 00:05:25.019 --rc genhtml_function_coverage=1 00:05:25.019 --rc genhtml_legend=1 00:05:25.019 --rc geninfo_all_blocks=1 00:05:25.019 --rc geninfo_unexecuted_blocks=1 00:05:25.019 00:05:25.019 ' 00:05:25.019 00:50:47 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.019 --rc genhtml_branch_coverage=1 00:05:25.019 --rc genhtml_function_coverage=1 00:05:25.019 --rc genhtml_legend=1 00:05:25.019 --rc geninfo_all_blocks=1 00:05:25.019 --rc geninfo_unexecuted_blocks=1 00:05:25.019 00:05:25.019 ' 00:05:25.019 00:50:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:25.019 00:50:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.019 00:50:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.019 00:50:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.019 ************************************ 00:05:25.019 START TEST env_memory 00:05:25.019 ************************************ 00:05:25.019 00:50:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:25.019 00:05:25.019 00:05:25.019 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.019 http://cunit.sourceforge.net/ 00:05:25.019 00:05:25.019 00:05:25.019 Suite: memory 00:05:25.020 Test: alloc and free memory map ...[2024-11-26 00:50:47.746282] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.020 passed 00:05:25.020 Test: mem map translation ...[2024-11-26 00:50:47.785393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.020 [2024-11-26 00:50:47.785546] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.020 [2024-11-26 00:50:47.786088] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.020 [2024-11-26 00:50:47.786217] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.020 passed 00:05:25.020 Test: mem map registration ...[2024-11-26 00:50:47.854635] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:25.020 [2024-11-26 00:50:47.854776] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:25.020 passed 00:05:25.281 Test: mem map adjacent registrations ...passed 00:05:25.281 00:05:25.281 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.281 suites 1 1 n/a 0 0 00:05:25.281 tests 4 4 4 0 0 00:05:25.281 asserts 152 152 152 0 n/a 00:05:25.281 00:05:25.281 Elapsed time = 0.233 seconds 00:05:25.281 ************************************ 00:05:25.281 END TEST env_memory 00:05:25.281 ************************************ 00:05:25.281 00:05:25.281 real 0m0.276s 00:05:25.281 user 0m0.240s 00:05:25.281 sys 0m0.025s 00:05:25.281 00:50:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.281 00:50:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.281 00:50:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.281 00:50:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.281 00:50:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.281 00:50:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.281 ************************************ 00:05:25.281 START TEST env_vtophys 00:05:25.281 ************************************ 00:05:25.281 00:50:48 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:25.281 EAL: lib.eal log level changed from notice to debug 00:05:25.281 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 1 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 2 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 3 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 4 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 5 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 6 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 7 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 8 as core 0 on socket 0 00:05:25.281 EAL: Detected lcore 9 as core 0 on socket 0 00:05:25.281 EAL: Maximum logical cores by configuration: 128 00:05:25.281 EAL: Detected CPU lcores: 10 00:05:25.281 EAL: Detected NUMA nodes: 1 00:05:25.281 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:25.281 EAL: Detected shared linkage of DPDK 00:05:25.281 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:25.281 EAL: Registered [vdev] bus. 00:05:25.281 EAL: bus.vdev log level changed from disabled to notice 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:25.281 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:25.281 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so 00:05:25.281 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so 00:05:25.281 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.281 EAL: No shared files mode enabled, IPC is disabled 00:05:25.281 EAL: Selected IOVA mode 'PA' 00:05:25.281 EAL: Probing VFIO support... 00:05:25.281 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:25.281 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:25.281 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.281 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.281 EAL: Setting up physically contiguous memory... 
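The EAL lines above show why this run ends up in IOVA mode 'PA': neither vfio module is loaded in the VM, so DPDK falls back to physical addressing, consistent with setup.sh binding the drives to uio_pci_generic earlier. A hypothetical pre-flight check along the same lines:

    # mirror EAL's probe: vfio module present and IOMMU groups populated?
    if [[ -e /sys/module/vfio ]] && compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
        echo "VFIO usable, EAL may select IOVA mode VA"
    else
        echo "no VFIO, EAL selects IOVA mode PA"      # the case in this log
    fi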
00:05:25.281 EAL: Setting maximum number of open files to 524288 00:05:25.281 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.281 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.281 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.281 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.281 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.281 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.281 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.281 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.281 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.281 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.281 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.282 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.282 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.282 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.282 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.282 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.282 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.282 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.282 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.282 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.282 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.282 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.282 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.282 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.282 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.282 EAL: Hugepages will be freed exactly as allocated. 00:05:25.282 EAL: No shared files mode enabled, IPC is disabled 00:05:25.282 EAL: No shared files mode enabled, IPC is disabled 00:05:25.282 EAL: TSC frequency is ~2600000 KHz 00:05:25.543 EAL: Main lcore 0 is ready (tid=7f9056fb7a40;cpuset=[0]) 00:05:25.543 EAL: Trying to obtain current memory policy. 00:05:25.543 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.543 EAL: Restoring previous memory policy: 0 00:05:25.543 EAL: request: mp_malloc_sync 00:05:25.543 EAL: No shared files mode enabled, IPC is disabled 00:05:25.543 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.543 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment 00:05:25.543 EAL: No shared files mode enabled, IPC is disabled 00:05:25.543 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.543 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:25.543 00:05:25.543 00:05:25.543 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.543 http://cunit.sourceforge.net/ 00:05:25.543 00:05:25.543 00:05:25.543 Suite: components_suite 00:05:25.804 Test: vtophys_malloc_test ...passed 00:05:25.804 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
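The four memseg-list reservations above are self-consistent: each list holds n_segs:8192 hugepages of hugepage_sz:2097152 bytes, which is exactly the 0x400000000 bytes of virtual space reserved per list (64 GiB of VA across the four lists, backed lazily as hugepages are actually allocated):

    printf '0x%x\n' $(( 8192 * 2097152 ))             # -> 0x400000000 per memseg list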
00:05:25.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.804 EAL: Restoring previous memory policy: 4 00:05:25.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.804 EAL: request: mp_malloc_sync 00:05:25.804 EAL: No shared files mode enabled, IPC is disabled 00:05:25.804 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.804 EAL: request: mp_malloc_sync 00:05:25.804 EAL: No shared files mode enabled, IPC is disabled 00:05:25.804 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.804 EAL: Trying to obtain current memory policy. 00:05:25.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.804 EAL: Restoring previous memory policy: 4 00:05:25.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.804 EAL: request: mp_malloc_sync 00:05:25.804 EAL: No shared files mode enabled, IPC is disabled 00:05:25.804 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.804 EAL: request: mp_malloc_sync 00:05:25.804 EAL: No shared files mode enabled, IPC is disabled 00:05:25.804 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.804 EAL: Trying to obtain current memory policy. 00:05:25.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.804 EAL: Restoring previous memory policy: 4 00:05:25.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.804 EAL: request: mp_malloc_sync 00:05:25.804 EAL: No shared files mode enabled, IPC is disabled 00:05:25.804 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.804 EAL: request: mp_malloc_sync 00:05:25.804 EAL: No shared files mode enabled, IPC is disabled 00:05:25.804 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.804 EAL: Trying to obtain current memory policy. 00:05:25.804 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.804 EAL: Restoring previous memory policy: 4 00:05:25.804 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.805 EAL: request: mp_malloc_sync 00:05:25.805 EAL: No shared files mode enabled, IPC is disabled 00:05:25.805 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.805 EAL: request: mp_malloc_sync 00:05:25.805 EAL: No shared files mode enabled, IPC is disabled 00:05:25.805 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.805 EAL: Trying to obtain current memory policy. 00:05:25.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.805 EAL: Restoring previous memory policy: 4 00:05:25.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.805 EAL: request: mp_malloc_sync 00:05:25.805 EAL: No shared files mode enabled, IPC is disabled 00:05:25.805 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.805 EAL: request: mp_malloc_sync 00:05:25.805 EAL: No shared files mode enabled, IPC is disabled 00:05:25.805 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.805 EAL: Trying to obtain current memory policy. 
00:05:25.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.805 EAL: Restoring previous memory policy: 4 00:05:25.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.805 EAL: request: mp_malloc_sync 00:05:25.805 EAL: No shared files mode enabled, IPC is disabled 00:05:25.805 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.805 EAL: request: mp_malloc_sync 00:05:25.805 EAL: No shared files mode enabled, IPC is disabled 00:05:25.805 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.805 EAL: Trying to obtain current memory policy. 00:05:25.805 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.805 EAL: Restoring previous memory policy: 4 00:05:25.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.805 EAL: request: mp_malloc_sync 00:05:25.805 EAL: No shared files mode enabled, IPC is disabled 00:05:25.805 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.805 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.066 EAL: request: mp_malloc_sync 00:05:26.067 EAL: No shared files mode enabled, IPC is disabled 00:05:26.067 EAL: Heap on socket 0 was shrunk by 130MB 00:05:26.067 EAL: Trying to obtain current memory policy. 00:05:26.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.067 EAL: Restoring previous memory policy: 4 00:05:26.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.067 EAL: request: mp_malloc_sync 00:05:26.067 EAL: No shared files mode enabled, IPC is disabled 00:05:26.067 EAL: Heap on socket 0 was expanded by 258MB 00:05:26.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.067 EAL: request: mp_malloc_sync 00:05:26.067 EAL: No shared files mode enabled, IPC is disabled 00:05:26.067 EAL: Heap on socket 0 was shrunk by 258MB 00:05:26.067 EAL: Trying to obtain current memory policy. 00:05:26.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.328 EAL: Restoring previous memory policy: 4 00:05:26.328 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.328 EAL: request: mp_malloc_sync 00:05:26.328 EAL: No shared files mode enabled, IPC is disabled 00:05:26.328 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.328 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.328 EAL: request: mp_malloc_sync 00:05:26.328 EAL: No shared files mode enabled, IPC is disabled 00:05:26.328 EAL: Heap on socket 0 was shrunk by 514MB 00:05:26.328 EAL: Trying to obtain current memory policy. 
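vtophys_spdk_malloc_test walks an allocation ladder: the expand sizes logged above and below (4, 6, 10, 18, 34, 66, 130, 258, 514, then 1026 MB) follow 2^n + 2 MB, so each round forces the heap to grow by the next power of two and then hand it all back on free:

    for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB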
00:05:26.328 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.589 EAL: Restoring previous memory policy: 4 00:05:26.589 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.589 EAL: request: mp_malloc_sync 00:05:26.589 EAL: No shared files mode enabled, IPC is disabled 00:05:26.589 EAL: Heap on socket 0 was expanded by 1026MB 00:05:26.849 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.849 passed 00:05:26.849 00:05:26.849 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.849 suites 1 1 n/a 0 0 00:05:26.849 tests 2 2 2 0 0 00:05:26.849 asserts 5302 5302 5302 0 n/a 00:05:26.849 00:05:26.849 Elapsed time = 1.398 seconds 00:05:26.849 EAL: request: mp_malloc_sync 00:05:26.849 EAL: No shared files mode enabled, IPC is disabled 00:05:26.849 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:26.849 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.849 EAL: request: mp_malloc_sync 00:05:26.849 EAL: No shared files mode enabled, IPC is disabled 00:05:26.849 EAL: Heap on socket 0 was shrunk by 2MB 00:05:26.849 EAL: No shared files mode enabled, IPC is disabled 00:05:26.849 EAL: No shared files mode enabled, IPC is disabled 00:05:26.849 EAL: No shared files mode enabled, IPC is disabled 00:05:26.849 00:05:26.849 real 0m1.664s 00:05:26.849 user 0m0.656s 00:05:26.849 sys 0m0.860s 00:05:26.849 00:50:49 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.849 ************************************ 00:05:26.849 END TEST env_vtophys 00:05:26.849 ************************************ 00:05:26.849 00:50:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:26.849 00:50:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:26.849 00:50:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.849 00:50:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.849 00:50:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.849 ************************************ 00:05:26.849 START TEST env_pci 00:05:26.849 ************************************ 00:05:26.849 00:50:49 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:27.110 00:05:27.110 00:05:27.110 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.110 http://cunit.sourceforge.net/ 00:05:27.110 00:05:27.110 00:05:27.110 Suite: pci 00:05:27.110 Test: pci_hook ...[2024-11-26 00:50:49.776625] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70868 has claimed it 00:05:27.110 passed 00:05:27.110 00:05:27.110 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.110 suites 1 1 n/a 0 0 00:05:27.110 tests 1 1 1 0 0 00:05:27.110 asserts 25 25 25 0 n/a 00:05:27.110 00:05:27.110 Elapsed time = 0.005 seconds 00:05:27.110 EAL: Cannot find device (10000:00:01.0) 00:05:27.110 EAL: Failed to attach device on primary process 00:05:27.110 00:05:27.110 real 0m0.068s 00:05:27.110 user 0m0.031s 00:05:27.110 sys 0m0.035s 00:05:27.110 00:50:49 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.110 00:50:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:27.110 ************************************ 00:05:27.110 END TEST env_pci 00:05:27.110 ************************************ 00:05:27.110 00:50:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.110 00:50:49 env -- env/env.sh@15 -- # uname 00:05:27.110 00:50:49 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.110 00:50:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.110 00:50:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.110 00:50:49 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:27.110 00:50:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.110 00:50:49 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.110 ************************************ 00:05:27.110 START TEST env_dpdk_post_init 00:05:27.110 ************************************ 00:05:27.110 00:50:49 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.110 EAL: Detected CPU lcores: 10 00:05:27.110 EAL: Detected NUMA nodes: 1 00:05:27.110 EAL: Detected shared linkage of DPDK 00:05:27.110 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.110 EAL: Selected IOVA mode 'PA' 00:05:27.371 Starting DPDK initialization... 00:05:27.371 Starting SPDK post initialization... 00:05:27.371 SPDK NVMe probe 00:05:27.371 Attaching to 0000:00:10.0 00:05:27.371 Attaching to 0000:00:11.0 00:05:27.371 Attaching to 0000:00:12.0 00:05:27.371 Attaching to 0000:00:13.0 00:05:27.371 Attached to 0000:00:13.0 00:05:27.371 Attached to 0000:00:10.0 00:05:27.371 Attached to 0000:00:11.0 00:05:27.371 Attached to 0000:00:12.0 00:05:27.371 Cleaning up... 00:05:27.371 00:05:27.371 real 0m0.242s 00:05:27.371 user 0m0.075s 00:05:27.371 sys 0m0.070s 00:05:27.371 ************************************ 00:05:27.371 END TEST env_dpdk_post_init 00:05:27.371 ************************************ 00:05:27.371 00:50:50 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.371 00:50:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.371 00:50:50 env -- env/env.sh@26 -- # uname 00:05:27.371 00:50:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:27.371 00:50:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:27.371 00:50:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.371 00:50:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.371 00:50:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.371 ************************************ 00:05:27.371 START TEST env_mem_callbacks 00:05:27.371 ************************************ 00:05:27.371 00:50:50 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:27.371 EAL: Detected CPU lcores: 10 00:05:27.371 EAL: Detected NUMA nodes: 1 00:05:27.371 EAL: Detected shared linkage of DPDK 00:05:27.371 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.371 EAL: Selected IOVA mode 'PA' 00:05:27.631 00:05:27.631 00:05:27.631 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.631 http://cunit.sourceforge.net/ 00:05:27.631 00:05:27.631 00:05:27.631 Suite: memory 00:05:27.631 Test: test ... 
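In the trace below, each register/unregister line is SPDK's mem-map callback firing as the DPDK heap grows or shrinks, and each PASSED "buf" line is the test locating its allocation inside a registered region. The first two regions line up exactly:

    # 'register 0x200000200000 2097152' is one 2 MiB hugepage; the next
    # registration starts right where it ends:
    printf '0x%x\n' $(( 0x200000200000 + 2097152 ))   # -> 0x200000400000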
00:05:27.631 register 0x200000200000 2097152 00:05:27.631 malloc 3145728 00:05:27.631 register 0x200000400000 4194304 00:05:27.631 buf 0x200000500000 len 3145728 PASSED 00:05:27.631 malloc 64 00:05:27.631 buf 0x2000004fff40 len 64 PASSED 00:05:27.631 malloc 4194304 00:05:27.631 register 0x200000800000 6291456 00:05:27.631 buf 0x200000a00000 len 4194304 PASSED 00:05:27.631 free 0x200000500000 3145728 00:05:27.631 free 0x2000004fff40 64 00:05:27.631 unregister 0x200000400000 4194304 PASSED 00:05:27.631 free 0x200000a00000 4194304 00:05:27.631 unregister 0x200000800000 6291456 PASSED 00:05:27.631 malloc 8388608 00:05:27.631 register 0x200000400000 10485760 00:05:27.631 buf 0x200000600000 len 8388608 PASSED 00:05:27.631 free 0x200000600000 8388608 00:05:27.631 unregister 0x200000400000 10485760 PASSED 00:05:27.631 passed 00:05:27.631 00:05:27.631 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.631 suites 1 1 n/a 0 0 00:05:27.631 tests 1 1 1 0 0 00:05:27.631 asserts 15 15 15 0 n/a 00:05:27.631 00:05:27.631 Elapsed time = 0.010 seconds 00:05:27.631 00:05:27.631 real 0m0.182s 00:05:27.631 user 0m0.033s 00:05:27.632 sys 0m0.047s 00:05:27.632 ************************************ 00:05:27.632 00:50:50 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.632 00:50:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:27.632 END TEST env_mem_callbacks 00:05:27.632 ************************************ 00:05:27.632 00:05:27.632 real 0m2.908s 00:05:27.632 user 0m1.188s 00:05:27.632 sys 0m1.259s 00:05:27.632 00:50:50 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.632 00:50:50 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.632 ************************************ 00:05:27.632 END TEST env 00:05:27.632 ************************************ 00:05:27.632 00:50:50 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:27.632 00:50:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.632 00:50:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.632 00:50:50 -- common/autotest_common.sh@10 -- # set +x 00:05:27.632 ************************************ 00:05:27.632 START TEST rpc 00:05:27.632 ************************************ 00:05:27.632 00:50:50 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:27.892 * Looking for test storage... 
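One detail worth pulling out of the mem_callbacks trace above: the 3145728-byte (3 MiB) malloc produced a 4194304-byte registration, because the heap is backed by 2 MiB hugepages and registrations land on hugepage boundaries:

    echo $(( (3145728 + 2097151) / 2097152 * 2097152 ))   # -> 4194304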
00:05:27.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:27.892 00:50:50 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:27.892 00:50:50 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:27.892 00:50:50 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:27.892 00:50:50 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:27.892 00:50:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.892 00:50:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.892 00:50:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.892 00:50:50 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.893 00:50:50 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.893 00:50:50 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.893 00:50:50 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.893 00:50:50 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.893 00:50:50 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.893 00:50:50 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.893 00:50:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.893 00:50:50 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:27.893 00:50:50 rpc -- scripts/common.sh@345 -- # : 1 00:05:27.893 00:50:50 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.893 00:50:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:27.893 00:50:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:27.893 00:50:50 rpc -- scripts/common.sh@353 -- # local d=1 00:05:27.893 00:50:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.893 00:50:50 rpc -- scripts/common.sh@355 -- # echo 1 00:05:27.893 00:50:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.893 00:50:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:27.893 00:50:50 rpc -- scripts/common.sh@353 -- # local d=2 00:05:27.893 00:50:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.893 00:50:50 rpc -- scripts/common.sh@355 -- # echo 2 00:05:27.893 00:50:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.893 00:50:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.893 00:50:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.893 00:50:50 rpc -- scripts/common.sh@368 -- # return 0 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:27.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.893 --rc genhtml_branch_coverage=1 00:05:27.893 --rc genhtml_function_coverage=1 00:05:27.893 --rc genhtml_legend=1 00:05:27.893 --rc geninfo_all_blocks=1 00:05:27.893 --rc geninfo_unexecuted_blocks=1 00:05:27.893 00:05:27.893 ' 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:27.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.893 --rc genhtml_branch_coverage=1 00:05:27.893 --rc genhtml_function_coverage=1 00:05:27.893 --rc genhtml_legend=1 00:05:27.893 --rc geninfo_all_blocks=1 00:05:27.893 --rc geninfo_unexecuted_blocks=1 00:05:27.893 00:05:27.893 ' 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:27.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.893 --rc genhtml_branch_coverage=1 00:05:27.893 --rc genhtml_function_coverage=1 00:05:27.893 --rc 
genhtml_legend=1 00:05:27.893 --rc geninfo_all_blocks=1 00:05:27.893 --rc geninfo_unexecuted_blocks=1 00:05:27.893 00:05:27.893 ' 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:27.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.893 --rc genhtml_branch_coverage=1 00:05:27.893 --rc genhtml_function_coverage=1 00:05:27.893 --rc genhtml_legend=1 00:05:27.893 --rc geninfo_all_blocks=1 00:05:27.893 --rc geninfo_unexecuted_blocks=1 00:05:27.893 00:05:27.893 ' 00:05:27.893 00:50:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70995 00:05:27.893 00:50:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:27.893 00:50:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.893 00:50:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70995 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@835 -- # '[' -z 70995 ']' 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.893 00:50:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.893 [2024-11-26 00:50:50.704457] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:27.893 [2024-11-26 00:50:50.704570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70995 ] 00:05:28.154 [2024-11-26 00:50:50.836914] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.154 [2024-11-26 00:50:50.865416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.154 [2024-11-26 00:50:50.883956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:28.154 [2024-11-26 00:50:50.883996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70995' to capture a snapshot of events at runtime. 00:05:28.154 [2024-11-26 00:50:50.884004] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.154 [2024-11-26 00:50:50.884014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.154 [2024-11-26 00:50:50.884021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70995 for offline analysis/debug. 
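waitforlisten above blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. A minimal sketch of that polling idea (a hypothetical helper, not the exact implementation; rpc_get_methods is a cheap RPC any live target answers, and the rpc.py path is relative to the SPDK repo root):

    wait_for_rpc() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1   # target never came up
    }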
00:05:28.154 [2024-11-26 00:50:50.884327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.727 00:50:51 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.727 00:50:51 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.727 00:50:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.727 00:50:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.727 00:50:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:28.727 00:50:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:28.727 00:50:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.727 00:50:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.727 00:50:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.727 ************************************ 00:05:28.727 START TEST rpc_integrity 00:05:28.727 ************************************ 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.727 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:28.727 { 00:05:28.727 "name": "Malloc0", 00:05:28.727 "aliases": [ 00:05:28.727 "3463dae2-95bf-460c-8918-225fbc754d94" 00:05:28.727 ], 00:05:28.727 "product_name": "Malloc disk", 00:05:28.727 "block_size": 512, 00:05:28.727 "num_blocks": 16384, 00:05:28.727 "uuid": "3463dae2-95bf-460c-8918-225fbc754d94", 00:05:28.727 "assigned_rate_limits": { 00:05:28.727 "rw_ios_per_sec": 0, 00:05:28.727 "rw_mbytes_per_sec": 0, 00:05:28.727 "r_mbytes_per_sec": 0, 00:05:28.727 "w_mbytes_per_sec": 0 00:05:28.727 }, 00:05:28.727 "claimed": false, 00:05:28.727 "zoned": false, 00:05:28.727 "supported_io_types": { 00:05:28.727 "read": true, 00:05:28.727 "write": true, 00:05:28.727 "unmap": true, 00:05:28.727 "flush": true, 
00:05:28.727 "reset": true, 00:05:28.727 "nvme_admin": false, 00:05:28.727 "nvme_io": false, 00:05:28.727 "nvme_io_md": false, 00:05:28.727 "write_zeroes": true, 00:05:28.727 "zcopy": true, 00:05:28.727 "get_zone_info": false, 00:05:28.727 "zone_management": false, 00:05:28.727 "zone_append": false, 00:05:28.727 "compare": false, 00:05:28.727 "compare_and_write": false, 00:05:28.727 "abort": true, 00:05:28.727 "seek_hole": false, 00:05:28.727 "seek_data": false, 00:05:28.727 "copy": true, 00:05:28.727 "nvme_iov_md": false 00:05:28.727 }, 00:05:28.727 "memory_domains": [ 00:05:28.727 { 00:05:28.727 "dma_device_id": "system", 00:05:28.727 "dma_device_type": 1 00:05:28.727 }, 00:05:28.727 { 00:05:28.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.727 "dma_device_type": 2 00:05:28.727 } 00:05:28.727 ], 00:05:28.727 "driver_specific": {} 00:05:28.727 } 00:05:28.727 ]' 00:05:28.727 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 [2024-11-26 00:50:51.667572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:28.987 [2024-11-26 00:50:51.667624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:28.987 [2024-11-26 00:50:51.667643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:28.987 [2024-11-26 00:50:51.667653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:28.987 [2024-11-26 00:50:51.669914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:28.987 [2024-11-26 00:50:51.669946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:28.987 Passthru0 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:28.987 { 00:05:28.987 "name": "Malloc0", 00:05:28.987 "aliases": [ 00:05:28.987 "3463dae2-95bf-460c-8918-225fbc754d94" 00:05:28.987 ], 00:05:28.987 "product_name": "Malloc disk", 00:05:28.987 "block_size": 512, 00:05:28.987 "num_blocks": 16384, 00:05:28.987 "uuid": "3463dae2-95bf-460c-8918-225fbc754d94", 00:05:28.987 "assigned_rate_limits": { 00:05:28.987 "rw_ios_per_sec": 0, 00:05:28.987 "rw_mbytes_per_sec": 0, 00:05:28.987 "r_mbytes_per_sec": 0, 00:05:28.987 "w_mbytes_per_sec": 0 00:05:28.987 }, 00:05:28.987 "claimed": true, 00:05:28.987 "claim_type": "exclusive_write", 00:05:28.987 "zoned": false, 00:05:28.987 "supported_io_types": { 00:05:28.987 "read": true, 00:05:28.987 "write": true, 00:05:28.987 "unmap": true, 00:05:28.987 "flush": true, 00:05:28.987 "reset": true, 00:05:28.987 "nvme_admin": false, 00:05:28.987 "nvme_io": false, 00:05:28.987 "nvme_io_md": false, 00:05:28.987 "write_zeroes": true, 00:05:28.987 "zcopy": true, 
00:05:28.987 "get_zone_info": false, 00:05:28.987 "zone_management": false, 00:05:28.987 "zone_append": false, 00:05:28.987 "compare": false, 00:05:28.987 "compare_and_write": false, 00:05:28.987 "abort": true, 00:05:28.987 "seek_hole": false, 00:05:28.987 "seek_data": false, 00:05:28.987 "copy": true, 00:05:28.987 "nvme_iov_md": false 00:05:28.987 }, 00:05:28.987 "memory_domains": [ 00:05:28.987 { 00:05:28.987 "dma_device_id": "system", 00:05:28.987 "dma_device_type": 1 00:05:28.987 }, 00:05:28.987 { 00:05:28.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.987 "dma_device_type": 2 00:05:28.987 } 00:05:28.987 ], 00:05:28.987 "driver_specific": {} 00:05:28.987 }, 00:05:28.987 { 00:05:28.987 "name": "Passthru0", 00:05:28.987 "aliases": [ 00:05:28.987 "eb0fc8e1-2937-5c4b-8c5e-8629f1e80379" 00:05:28.987 ], 00:05:28.987 "product_name": "passthru", 00:05:28.987 "block_size": 512, 00:05:28.987 "num_blocks": 16384, 00:05:28.987 "uuid": "eb0fc8e1-2937-5c4b-8c5e-8629f1e80379", 00:05:28.987 "assigned_rate_limits": { 00:05:28.987 "rw_ios_per_sec": 0, 00:05:28.987 "rw_mbytes_per_sec": 0, 00:05:28.987 "r_mbytes_per_sec": 0, 00:05:28.987 "w_mbytes_per_sec": 0 00:05:28.987 }, 00:05:28.987 "claimed": false, 00:05:28.987 "zoned": false, 00:05:28.987 "supported_io_types": { 00:05:28.987 "read": true, 00:05:28.987 "write": true, 00:05:28.987 "unmap": true, 00:05:28.987 "flush": true, 00:05:28.987 "reset": true, 00:05:28.987 "nvme_admin": false, 00:05:28.987 "nvme_io": false, 00:05:28.987 "nvme_io_md": false, 00:05:28.987 "write_zeroes": true, 00:05:28.987 "zcopy": true, 00:05:28.987 "get_zone_info": false, 00:05:28.987 "zone_management": false, 00:05:28.987 "zone_append": false, 00:05:28.987 "compare": false, 00:05:28.987 "compare_and_write": false, 00:05:28.987 "abort": true, 00:05:28.987 "seek_hole": false, 00:05:28.987 "seek_data": false, 00:05:28.987 "copy": true, 00:05:28.987 "nvme_iov_md": false 00:05:28.987 }, 00:05:28.987 "memory_domains": [ 00:05:28.987 { 00:05:28.987 "dma_device_id": "system", 00:05:28.987 "dma_device_type": 1 00:05:28.987 }, 00:05:28.987 { 00:05:28.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:28.987 "dma_device_type": 2 00:05:28.987 } 00:05:28.987 ], 00:05:28.987 "driver_specific": { 00:05:28.987 "passthru": { 00:05:28.987 "name": "Passthru0", 00:05:28.987 "base_bdev_name": "Malloc0" 00:05:28.987 } 00:05:28.987 } 00:05:28.987 } 00:05:28.987 ]' 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:28.987 00:50:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:28.987 00:05:28.987 real 0m0.235s 00:05:28.987 user 0m0.138s 00:05:28.987 sys 0m0.034s 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.987 ************************************ 00:05:28.987 END TEST rpc_integrity 00:05:28.987 ************************************ 00:05:28.987 00:50:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 00:50:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:28.987 00:50:51 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.987 00:50:51 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.987 00:50:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 ************************************ 00:05:28.987 START TEST rpc_plugins 00:05:28.987 ************************************ 00:05:28.987 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:28.987 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:28.987 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.987 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.987 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:28.987 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:28.987 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.987 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:28.987 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.987 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:28.987 { 00:05:28.987 "name": "Malloc1", 00:05:28.987 "aliases": [ 00:05:28.987 "7c6106d6-e1fd-458f-bb6a-7c1501ceae9b" 00:05:28.987 ], 00:05:28.987 "product_name": "Malloc disk", 00:05:28.987 "block_size": 4096, 00:05:28.987 "num_blocks": 256, 00:05:28.987 "uuid": "7c6106d6-e1fd-458f-bb6a-7c1501ceae9b", 00:05:28.987 "assigned_rate_limits": { 00:05:28.987 "rw_ios_per_sec": 0, 00:05:28.987 "rw_mbytes_per_sec": 0, 00:05:28.987 "r_mbytes_per_sec": 0, 00:05:28.987 "w_mbytes_per_sec": 0 00:05:28.987 }, 00:05:28.987 "claimed": false, 00:05:28.987 "zoned": false, 00:05:28.987 "supported_io_types": { 00:05:28.987 "read": true, 00:05:28.987 "write": true, 00:05:28.987 "unmap": true, 00:05:28.987 "flush": true, 00:05:28.987 "reset": true, 00:05:28.987 "nvme_admin": false, 00:05:28.987 "nvme_io": false, 00:05:28.987 "nvme_io_md": false, 00:05:28.987 "write_zeroes": true, 00:05:28.987 "zcopy": true, 00:05:28.987 "get_zone_info": false, 00:05:28.987 "zone_management": false, 00:05:28.987 "zone_append": false, 00:05:28.987 "compare": false, 00:05:28.987 "compare_and_write": false, 00:05:28.987 "abort": true, 00:05:28.987 "seek_hole": false, 00:05:28.987 "seek_data": false, 00:05:28.987 "copy": true, 00:05:28.987 "nvme_iov_md": false 00:05:28.987 }, 00:05:28.987 "memory_domains": [ 00:05:28.987 { 00:05:28.987 "dma_device_id": "system", 00:05:28.987 "dma_device_type": 1 00:05:28.987 }, 00:05:28.987 { 00:05:28.987 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:28.987 "dma_device_type": 2 00:05:28.987 } 00:05:28.987 ], 00:05:28.987 "driver_specific": {} 00:05:28.987 } 00:05:28.987 ]' 00:05:28.987 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:29.249 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:29.249 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.249 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.249 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:29.249 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:29.249 00:50:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:29.249 00:05:29.249 real 0m0.119s 00:05:29.249 user 0m0.064s 00:05:29.249 sys 0m0.018s 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.249 ************************************ 00:05:29.249 END TEST rpc_plugins 00:05:29.249 ************************************ 00:05:29.249 00:50:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.249 00:50:52 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.249 00:50:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.249 00:50:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.249 00:50:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.249 ************************************ 00:05:29.249 START TEST rpc_trace_cmd_test 00:05:29.249 ************************************ 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.249 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70995", 00:05:29.249 "tpoint_group_mask": "0x8", 00:05:29.249 "iscsi_conn": { 00:05:29.249 "mask": "0x2", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "scsi": { 00:05:29.249 "mask": "0x4", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "bdev": { 00:05:29.249 "mask": "0x8", 00:05:29.249 "tpoint_mask": "0xffffffffffffffff" 00:05:29.249 }, 00:05:29.249 "nvmf_rdma": { 00:05:29.249 "mask": "0x10", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "nvmf_tcp": { 00:05:29.249 "mask": "0x20", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "ftl": { 00:05:29.249 "mask": "0x40", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "blobfs": { 00:05:29.249 "mask": "0x80", 00:05:29.249 
"tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "dsa": { 00:05:29.249 "mask": "0x200", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "thread": { 00:05:29.249 "mask": "0x400", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "nvme_pcie": { 00:05:29.249 "mask": "0x800", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "iaa": { 00:05:29.249 "mask": "0x1000", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "nvme_tcp": { 00:05:29.249 "mask": "0x2000", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "bdev_nvme": { 00:05:29.249 "mask": "0x4000", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "sock": { 00:05:29.249 "mask": "0x8000", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "blob": { 00:05:29.249 "mask": "0x10000", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "bdev_raid": { 00:05:29.249 "mask": "0x20000", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 }, 00:05:29.249 "scheduler": { 00:05:29.249 "mask": "0x40000", 00:05:29.249 "tpoint_mask": "0x0" 00:05:29.249 } 00:05:29.249 }' 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:29.249 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:29.511 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:29.511 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:29.511 00:50:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:29.511 00:05:29.511 real 0m0.179s 00:05:29.511 user 0m0.138s 00:05:29.511 sys 0m0.019s 00:05:29.511 00:50:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.511 00:50:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.511 ************************************ 00:05:29.511 END TEST rpc_trace_cmd_test 00:05:29.511 ************************************ 00:05:29.511 00:50:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:29.511 00:50:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:29.511 00:50:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:29.511 00:50:52 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.511 00:50:52 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.511 00:50:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.511 ************************************ 00:05:29.511 START TEST rpc_daemon_integrity 00:05:29.511 ************************************ 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.511 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.511 { 00:05:29.511 "name": "Malloc2", 00:05:29.511 "aliases": [ 00:05:29.511 "70dbdf32-3bbb-4323-b483-6eaad43a6e0b" 00:05:29.511 ], 00:05:29.511 "product_name": "Malloc disk", 00:05:29.511 "block_size": 512, 00:05:29.511 "num_blocks": 16384, 00:05:29.511 "uuid": "70dbdf32-3bbb-4323-b483-6eaad43a6e0b", 00:05:29.511 "assigned_rate_limits": { 00:05:29.511 "rw_ios_per_sec": 0, 00:05:29.511 "rw_mbytes_per_sec": 0, 00:05:29.511 "r_mbytes_per_sec": 0, 00:05:29.511 "w_mbytes_per_sec": 0 00:05:29.511 }, 00:05:29.511 "claimed": false, 00:05:29.511 "zoned": false, 00:05:29.511 "supported_io_types": { 00:05:29.511 "read": true, 00:05:29.511 "write": true, 00:05:29.511 "unmap": true, 00:05:29.511 "flush": true, 00:05:29.511 "reset": true, 00:05:29.511 "nvme_admin": false, 00:05:29.511 "nvme_io": false, 00:05:29.511 "nvme_io_md": false, 00:05:29.511 "write_zeroes": true, 00:05:29.511 "zcopy": true, 00:05:29.511 "get_zone_info": false, 00:05:29.511 "zone_management": false, 00:05:29.511 "zone_append": false, 00:05:29.511 "compare": false, 00:05:29.512 "compare_and_write": false, 00:05:29.512 "abort": true, 00:05:29.512 "seek_hole": false, 00:05:29.512 "seek_data": false, 00:05:29.512 "copy": true, 00:05:29.512 "nvme_iov_md": false 00:05:29.512 }, 00:05:29.512 "memory_domains": [ 00:05:29.512 { 00:05:29.512 "dma_device_id": "system", 00:05:29.512 "dma_device_type": 1 00:05:29.512 }, 00:05:29.512 { 00:05:29.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.512 "dma_device_type": 2 00:05:29.512 } 00:05:29.512 ], 00:05:29.512 "driver_specific": {} 00:05:29.512 } 00:05:29.512 ]' 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.512 [2024-11-26 00:50:52.367987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:29.512 [2024-11-26 00:50:52.368039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.512 [2024-11-26 00:50:52.368058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009680 00:05:29.512 [2024-11-26 00:50:52.368068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.512 [2024-11-26 00:50:52.370246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.512 [2024-11-26 00:50:52.370282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.512 Passthru0 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.512 { 00:05:29.512 "name": "Malloc2", 00:05:29.512 "aliases": [ 00:05:29.512 "70dbdf32-3bbb-4323-b483-6eaad43a6e0b" 00:05:29.512 ], 00:05:29.512 "product_name": "Malloc disk", 00:05:29.512 "block_size": 512, 00:05:29.512 "num_blocks": 16384, 00:05:29.512 "uuid": "70dbdf32-3bbb-4323-b483-6eaad43a6e0b", 00:05:29.512 "assigned_rate_limits": { 00:05:29.512 "rw_ios_per_sec": 0, 00:05:29.512 "rw_mbytes_per_sec": 0, 00:05:29.512 "r_mbytes_per_sec": 0, 00:05:29.512 "w_mbytes_per_sec": 0 00:05:29.512 }, 00:05:29.512 "claimed": true, 00:05:29.512 "claim_type": "exclusive_write", 00:05:29.512 "zoned": false, 00:05:29.512 "supported_io_types": { 00:05:29.512 "read": true, 00:05:29.512 "write": true, 00:05:29.512 "unmap": true, 00:05:29.512 "flush": true, 00:05:29.512 "reset": true, 00:05:29.512 "nvme_admin": false, 00:05:29.512 "nvme_io": false, 00:05:29.512 "nvme_io_md": false, 00:05:29.512 "write_zeroes": true, 00:05:29.512 "zcopy": true, 00:05:29.512 "get_zone_info": false, 00:05:29.512 "zone_management": false, 00:05:29.512 "zone_append": false, 00:05:29.512 "compare": false, 00:05:29.512 "compare_and_write": false, 00:05:29.512 "abort": true, 00:05:29.512 "seek_hole": false, 00:05:29.512 "seek_data": false, 00:05:29.512 "copy": true, 00:05:29.512 "nvme_iov_md": false 00:05:29.512 }, 00:05:29.512 "memory_domains": [ 00:05:29.512 { 00:05:29.512 "dma_device_id": "system", 00:05:29.512 "dma_device_type": 1 00:05:29.512 }, 00:05:29.512 { 00:05:29.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.512 "dma_device_type": 2 00:05:29.512 } 00:05:29.512 ], 00:05:29.512 "driver_specific": {} 00:05:29.512 }, 00:05:29.512 { 00:05:29.512 "name": "Passthru0", 00:05:29.512 "aliases": [ 00:05:29.512 "423811aa-1d7f-571f-a8ea-b35422530608" 00:05:29.512 ], 00:05:29.512 "product_name": "passthru", 00:05:29.512 "block_size": 512, 00:05:29.512 "num_blocks": 16384, 00:05:29.512 "uuid": "423811aa-1d7f-571f-a8ea-b35422530608", 00:05:29.512 "assigned_rate_limits": { 00:05:29.512 "rw_ios_per_sec": 0, 00:05:29.512 "rw_mbytes_per_sec": 0, 00:05:29.512 "r_mbytes_per_sec": 0, 00:05:29.512 "w_mbytes_per_sec": 0 00:05:29.512 }, 00:05:29.512 "claimed": false, 00:05:29.512 "zoned": false, 00:05:29.512 "supported_io_types": { 00:05:29.512 "read": true, 00:05:29.512 "write": true, 00:05:29.512 "unmap": true, 00:05:29.512 "flush": true, 00:05:29.512 "reset": true, 00:05:29.512 "nvme_admin": false, 00:05:29.512 "nvme_io": false, 00:05:29.512 "nvme_io_md": false, 00:05:29.512 "write_zeroes": true, 00:05:29.512 "zcopy": true, 00:05:29.512 "get_zone_info": false, 00:05:29.512 
"zone_management": false, 00:05:29.512 "zone_append": false, 00:05:29.512 "compare": false, 00:05:29.512 "compare_and_write": false, 00:05:29.512 "abort": true, 00:05:29.512 "seek_hole": false, 00:05:29.512 "seek_data": false, 00:05:29.512 "copy": true, 00:05:29.512 "nvme_iov_md": false 00:05:29.512 }, 00:05:29.512 "memory_domains": [ 00:05:29.512 { 00:05:29.512 "dma_device_id": "system", 00:05:29.512 "dma_device_type": 1 00:05:29.512 }, 00:05:29.512 { 00:05:29.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.512 "dma_device_type": 2 00:05:29.512 } 00:05:29.512 ], 00:05:29.512 "driver_specific": { 00:05:29.512 "passthru": { 00:05:29.512 "name": "Passthru0", 00:05:29.512 "base_bdev_name": "Malloc2" 00:05:29.512 } 00:05:29.512 } 00:05:29.512 } 00:05:29.512 ]' 00:05:29.512 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.773 ************************************ 00:05:29.773 END TEST rpc_daemon_integrity 00:05:29.773 ************************************ 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.773 00:05:29.773 real 0m0.230s 00:05:29.773 user 0m0.131s 00:05:29.773 sys 0m0.032s 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.773 00:50:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.773 00:50:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:29.773 00:50:52 rpc -- rpc/rpc.sh@84 -- # killprocess 70995 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@954 -- # '[' -z 70995 ']' 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@958 -- # kill -0 70995 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@959 -- # uname 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70995 00:05:29.773 killing process with pid 70995 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 
00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70995' 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@973 -- # kill 70995 00:05:29.773 00:50:52 rpc -- common/autotest_common.sh@978 -- # wait 70995 00:05:30.035 ************************************ 00:05:30.035 END TEST rpc 00:05:30.035 ************************************ 00:05:30.035 00:05:30.035 real 0m2.329s 00:05:30.035 user 0m2.781s 00:05:30.035 sys 0m0.579s 00:05:30.035 00:50:52 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.035 00:50:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.035 00:50:52 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:30.035 00:50:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.035 00:50:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.035 00:50:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.035 ************************************ 00:05:30.035 START TEST skip_rpc 00:05:30.035 ************************************ 00:05:30.035 00:50:52 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:30.035 * Looking for test storage... 00:05:30.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.035 00:50:52 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.035 00:50:52 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.035 00:50:52 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.296 00:50:52 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.296 00:50:53 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.296 00:50:53 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.296 00:50:53 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.296 00:50:53 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.296 00:50:53 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.296 00:50:53 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.296 00:50:53 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.297 00:50:53 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.297 --rc genhtml_branch_coverage=1 00:05:30.297 --rc genhtml_function_coverage=1 00:05:30.297 --rc genhtml_legend=1 00:05:30.297 --rc geninfo_all_blocks=1 00:05:30.297 --rc geninfo_unexecuted_blocks=1 00:05:30.297 00:05:30.297 ' 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.297 --rc genhtml_branch_coverage=1 00:05:30.297 --rc genhtml_function_coverage=1 00:05:30.297 --rc genhtml_legend=1 00:05:30.297 --rc geninfo_all_blocks=1 00:05:30.297 --rc geninfo_unexecuted_blocks=1 00:05:30.297 00:05:30.297 ' 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.297 --rc genhtml_branch_coverage=1 00:05:30.297 --rc genhtml_function_coverage=1 00:05:30.297 --rc genhtml_legend=1 00:05:30.297 --rc geninfo_all_blocks=1 00:05:30.297 --rc geninfo_unexecuted_blocks=1 00:05:30.297 00:05:30.297 ' 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.297 --rc genhtml_branch_coverage=1 00:05:30.297 --rc genhtml_function_coverage=1 00:05:30.297 --rc genhtml_legend=1 00:05:30.297 --rc geninfo_all_blocks=1 00:05:30.297 --rc geninfo_unexecuted_blocks=1 00:05:30.297 00:05:30.297 ' 00:05:30.297 00:50:53 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:30.297 00:50:53 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.297 00:50:53 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.297 00:50:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.297 ************************************ 00:05:30.297 START TEST skip_rpc 00:05:30.297 ************************************ 00:05:30.297 00:50:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:30.297 00:50:53 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=71197 00:05:30.297 00:50:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.297 00:50:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:30.297 00:50:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:30.297 [2024-11-26 00:50:53.104927] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:30.297 [2024-11-26 00:50:53.105034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71197 ] 00:05:30.557 [2024-11-26 00:50:53.236459] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.557 [2024-11-26 00:50:53.265476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.557 [2024-11-26 00:50:53.284474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 71197 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 71197 ']' 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 71197 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71197 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.846 killing process with pid 71197 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71197' 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 71197 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 71197 00:05:35.846 00:05:35.846 real 0m5.256s 00:05:35.846 user 0m4.900s 00:05:35.846 sys 0m0.257s 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.846 00:50:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.846 ************************************ 00:05:35.846 END TEST skip_rpc 00:05:35.846 ************************************ 00:05:35.846 00:50:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:35.846 00:50:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.846 00:50:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.846 00:50:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.846 ************************************ 00:05:35.846 START TEST skip_rpc_with_json 00:05:35.846 ************************************ 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71284 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71284 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 71284 ']' 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.846 00:50:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.846 [2024-11-26 00:50:58.406768] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:35.846 [2024-11-26 00:50:58.406896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71284 ] 00:05:35.846 [2024-11-26 00:50:58.539401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:35.846 [2024-11-26 00:50:58.564117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.846 [2024-11-26 00:50:58.580950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 [2024-11-26 00:50:59.235815] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:36.418 request: 00:05:36.418 { 00:05:36.418 "trtype": "tcp", 00:05:36.418 "method": "nvmf_get_transports", 00:05:36.418 "req_id": 1 00:05:36.418 } 00:05:36.418 Got JSON-RPC error response 00:05:36.418 response: 00:05:36.418 { 00:05:36.418 "code": -19, 00:05:36.418 "message": "No such device" 00:05:36.418 } 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 [2024-11-26 00:50:59.247888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.418 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.676 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.676 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:36.676 { 00:05:36.676 "subsystems": [ 00:05:36.676 { 00:05:36.676 "subsystem": "fsdev", 00:05:36.676 "config": [ 00:05:36.676 { 00:05:36.676 "method": "fsdev_set_opts", 00:05:36.676 "params": { 00:05:36.676 "fsdev_io_pool_size": 65535, 00:05:36.676 "fsdev_io_cache_size": 256 00:05:36.676 } 00:05:36.676 } 00:05:36.676 ] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "keyring", 00:05:36.676 "config": [] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "iobuf", 00:05:36.676 "config": [ 00:05:36.676 { 00:05:36.676 "method": "iobuf_set_options", 00:05:36.676 "params": { 00:05:36.676 "small_pool_count": 8192, 00:05:36.676 "large_pool_count": 1024, 00:05:36.676 "small_bufsize": 8192, 00:05:36.676 "large_bufsize": 135168, 00:05:36.676 "enable_numa": false 00:05:36.676 } 00:05:36.676 } 00:05:36.676 ] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "sock", 00:05:36.676 "config": [ 00:05:36.676 { 00:05:36.676 "method": "sock_set_default_impl", 00:05:36.676 "params": { 00:05:36.676 "impl_name": "posix" 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "sock_impl_set_options", 00:05:36.676 "params": { 00:05:36.676 "impl_name": "ssl", 00:05:36.676 "recv_buf_size": 4096, 00:05:36.676 
"send_buf_size": 4096, 00:05:36.676 "enable_recv_pipe": true, 00:05:36.676 "enable_quickack": false, 00:05:36.676 "enable_placement_id": 0, 00:05:36.676 "enable_zerocopy_send_server": true, 00:05:36.676 "enable_zerocopy_send_client": false, 00:05:36.676 "zerocopy_threshold": 0, 00:05:36.676 "tls_version": 0, 00:05:36.676 "enable_ktls": false 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "sock_impl_set_options", 00:05:36.676 "params": { 00:05:36.676 "impl_name": "posix", 00:05:36.676 "recv_buf_size": 2097152, 00:05:36.676 "send_buf_size": 2097152, 00:05:36.676 "enable_recv_pipe": true, 00:05:36.676 "enable_quickack": false, 00:05:36.676 "enable_placement_id": 0, 00:05:36.676 "enable_zerocopy_send_server": true, 00:05:36.676 "enable_zerocopy_send_client": false, 00:05:36.676 "zerocopy_threshold": 0, 00:05:36.676 "tls_version": 0, 00:05:36.676 "enable_ktls": false 00:05:36.676 } 00:05:36.676 } 00:05:36.676 ] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "vmd", 00:05:36.676 "config": [] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "accel", 00:05:36.676 "config": [ 00:05:36.676 { 00:05:36.676 "method": "accel_set_options", 00:05:36.676 "params": { 00:05:36.676 "small_cache_size": 128, 00:05:36.676 "large_cache_size": 16, 00:05:36.676 "task_count": 2048, 00:05:36.676 "sequence_count": 2048, 00:05:36.676 "buf_count": 2048 00:05:36.676 } 00:05:36.676 } 00:05:36.676 ] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "bdev", 00:05:36.676 "config": [ 00:05:36.676 { 00:05:36.676 "method": "bdev_set_options", 00:05:36.676 "params": { 00:05:36.676 "bdev_io_pool_size": 65535, 00:05:36.676 "bdev_io_cache_size": 256, 00:05:36.676 "bdev_auto_examine": true, 00:05:36.676 "iobuf_small_cache_size": 128, 00:05:36.676 "iobuf_large_cache_size": 16 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "bdev_raid_set_options", 00:05:36.676 "params": { 00:05:36.676 "process_window_size_kb": 1024, 00:05:36.676 "process_max_bandwidth_mb_sec": 0 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "bdev_iscsi_set_options", 00:05:36.676 "params": { 00:05:36.676 "timeout_sec": 30 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "bdev_nvme_set_options", 00:05:36.676 "params": { 00:05:36.676 "action_on_timeout": "none", 00:05:36.676 "timeout_us": 0, 00:05:36.676 "timeout_admin_us": 0, 00:05:36.676 "keep_alive_timeout_ms": 10000, 00:05:36.676 "arbitration_burst": 0, 00:05:36.676 "low_priority_weight": 0, 00:05:36.676 "medium_priority_weight": 0, 00:05:36.676 "high_priority_weight": 0, 00:05:36.676 "nvme_adminq_poll_period_us": 10000, 00:05:36.676 "nvme_ioq_poll_period_us": 0, 00:05:36.676 "io_queue_requests": 0, 00:05:36.676 "delay_cmd_submit": true, 00:05:36.676 "transport_retry_count": 4, 00:05:36.676 "bdev_retry_count": 3, 00:05:36.676 "transport_ack_timeout": 0, 00:05:36.676 "ctrlr_loss_timeout_sec": 0, 00:05:36.676 "reconnect_delay_sec": 0, 00:05:36.676 "fast_io_fail_timeout_sec": 0, 00:05:36.676 "disable_auto_failback": false, 00:05:36.676 "generate_uuids": false, 00:05:36.676 "transport_tos": 0, 00:05:36.676 "nvme_error_stat": false, 00:05:36.676 "rdma_srq_size": 0, 00:05:36.676 "io_path_stat": false, 00:05:36.676 "allow_accel_sequence": false, 00:05:36.676 "rdma_max_cq_size": 0, 00:05:36.676 "rdma_cm_event_timeout_ms": 0, 00:05:36.676 "dhchap_digests": [ 00:05:36.676 "sha256", 00:05:36.676 "sha384", 00:05:36.676 "sha512" 00:05:36.676 ], 00:05:36.676 "dhchap_dhgroups": [ 00:05:36.676 "null", 00:05:36.676 
"ffdhe2048", 00:05:36.676 "ffdhe3072", 00:05:36.676 "ffdhe4096", 00:05:36.676 "ffdhe6144", 00:05:36.676 "ffdhe8192" 00:05:36.676 ] 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "bdev_nvme_set_hotplug", 00:05:36.676 "params": { 00:05:36.676 "period_us": 100000, 00:05:36.676 "enable": false 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "bdev_wait_for_examine" 00:05:36.676 } 00:05:36.676 ] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "scsi", 00:05:36.676 "config": null 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "scheduler", 00:05:36.676 "config": [ 00:05:36.676 { 00:05:36.676 "method": "framework_set_scheduler", 00:05:36.676 "params": { 00:05:36.676 "name": "static" 00:05:36.676 } 00:05:36.676 } 00:05:36.676 ] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "vhost_scsi", 00:05:36.676 "config": [] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "vhost_blk", 00:05:36.676 "config": [] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "ublk", 00:05:36.676 "config": [] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "nbd", 00:05:36.676 "config": [] 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "subsystem": "nvmf", 00:05:36.676 "config": [ 00:05:36.676 { 00:05:36.676 "method": "nvmf_set_config", 00:05:36.676 "params": { 00:05:36.676 "discovery_filter": "match_any", 00:05:36.676 "admin_cmd_passthru": { 00:05:36.676 "identify_ctrlr": false 00:05:36.676 }, 00:05:36.676 "dhchap_digests": [ 00:05:36.676 "sha256", 00:05:36.676 "sha384", 00:05:36.676 "sha512" 00:05:36.676 ], 00:05:36.676 "dhchap_dhgroups": [ 00:05:36.676 "null", 00:05:36.676 "ffdhe2048", 00:05:36.676 "ffdhe3072", 00:05:36.676 "ffdhe4096", 00:05:36.676 "ffdhe6144", 00:05:36.676 "ffdhe8192" 00:05:36.676 ] 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "nvmf_set_max_subsystems", 00:05:36.676 "params": { 00:05:36.676 "max_subsystems": 1024 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "nvmf_set_crdt", 00:05:36.676 "params": { 00:05:36.676 "crdt1": 0, 00:05:36.676 "crdt2": 0, 00:05:36.676 "crdt3": 0 00:05:36.676 } 00:05:36.676 }, 00:05:36.676 { 00:05:36.676 "method": "nvmf_create_transport", 00:05:36.676 "params": { 00:05:36.676 "trtype": "TCP", 00:05:36.676 "max_queue_depth": 128, 00:05:36.676 "max_io_qpairs_per_ctrlr": 127, 00:05:36.676 "in_capsule_data_size": 4096, 00:05:36.676 "max_io_size": 131072, 00:05:36.676 "io_unit_size": 131072, 00:05:36.676 "max_aq_depth": 128, 00:05:36.676 "num_shared_buffers": 511, 00:05:36.676 "buf_cache_size": 4294967295, 00:05:36.677 "dif_insert_or_strip": false, 00:05:36.677 "zcopy": false, 00:05:36.677 "c2h_success": true, 00:05:36.677 "sock_priority": 0, 00:05:36.677 "abort_timeout_sec": 1, 00:05:36.677 "ack_timeout": 0, 00:05:36.677 "data_wr_pool_size": 0 00:05:36.677 } 00:05:36.677 } 00:05:36.677 ] 00:05:36.677 }, 00:05:36.677 { 00:05:36.677 "subsystem": "iscsi", 00:05:36.677 "config": [ 00:05:36.677 { 00:05:36.677 "method": "iscsi_set_options", 00:05:36.677 "params": { 00:05:36.677 "node_base": "iqn.2016-06.io.spdk", 00:05:36.677 "max_sessions": 128, 00:05:36.677 "max_connections_per_session": 2, 00:05:36.677 "max_queue_depth": 64, 00:05:36.677 "default_time2wait": 2, 00:05:36.677 "default_time2retain": 20, 00:05:36.677 "first_burst_length": 8192, 00:05:36.677 "immediate_data": true, 00:05:36.677 "allow_duplicated_isid": false, 00:05:36.677 "error_recovery_level": 0, 00:05:36.677 "nop_timeout": 60, 00:05:36.677 "nop_in_interval": 30, 00:05:36.677 
"disable_chap": false, 00:05:36.677 "require_chap": false, 00:05:36.677 "mutual_chap": false, 00:05:36.677 "chap_group": 0, 00:05:36.677 "max_large_datain_per_connection": 64, 00:05:36.677 "max_r2t_per_connection": 4, 00:05:36.677 "pdu_pool_size": 36864, 00:05:36.677 "immediate_data_pool_size": 16384, 00:05:36.677 "data_out_pool_size": 2048 00:05:36.677 } 00:05:36.677 } 00:05:36.677 ] 00:05:36.677 } 00:05:36.677 ] 00:05:36.677 } 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71284 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71284 ']' 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71284 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71284 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71284' 00:05:36.677 killing process with pid 71284 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71284 00:05:36.677 00:50:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71284 00:05:36.937 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71307 00:05:36.937 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:36.937 00:50:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71307 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 71307 ']' 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 71307 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71307 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.202 killing process with pid 71307 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71307' 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 71307 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 71307 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:42.202 00:51:04 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:42.202 00:05:42.202 real 0m6.580s 00:05:42.202 user 0m6.282s 00:05:42.202 sys 0m0.522s 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.202 ************************************ 00:05:42.202 END TEST skip_rpc_with_json 00:05:42.202 ************************************ 00:05:42.202 00:51:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:42.202 00:51:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.202 00:51:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.202 00:51:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.202 ************************************ 00:05:42.202 START TEST skip_rpc_with_delay 00:05:42.202 ************************************ 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:42.202 00:51:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.202 [2024-11-26 00:51:05.048624] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
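skip_rpc_with_delay probes an argument conflict rather than runtime behavior: --wait-for-rpc pauses framework init until an RPC arrives, which can never happen under --no-rpc-server, so spdk_tgt must refuse to start with the app.c:842 error above and exit non-zero. The assertion is essentially:

    if "$spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        # Contradictory flags: waiting for an RPC with no RPC server must be rejected
        echo "unexpected: spdk_tgt started despite --wait-for-rpc" >&2
        exit 1
    fi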
00:05:42.202 00:51:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:42.202 00:51:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:42.202 00:51:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:42.202 00:51:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:42.202 00:05:42.202 real 0m0.133s 00:05:42.202 user 0m0.071s 00:05:42.202 sys 0m0.060s 00:05:42.202 ************************************ 00:05:42.202 END TEST skip_rpc_with_delay 00:05:42.202 00:51:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.202 00:51:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:42.202 ************************************ 00:05:42.461 00:51:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:42.461 00:51:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:42.461 00:51:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:42.461 00:51:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.461 00:51:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.461 00:51:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 ************************************ 00:05:42.461 START TEST exit_on_failed_rpc_init 00:05:42.461 ************************************ 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71419 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71419 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 71419 ']' 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.461 00:51:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.461 [2024-11-26 00:51:05.230652] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:42.461 [2024-11-26 00:51:05.230811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71419 ] 00:05:42.461 [2024-11-26 00:51:05.368052] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:05:42.718 [2024-11-26 00:51:05.393023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.718 [2024-11-26 00:51:05.418369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:43.284 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.285 [2024-11-26 00:51:06.148035] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:43.285 [2024-11-26 00:51:06.148145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71431 ] 00:05:43.544 [2024-11-26 00:51:06.279578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:43.544 [2024-11-26 00:51:06.309408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.544 [2024-11-26 00:51:06.327540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.544 [2024-11-26 00:51:06.327621] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:43.544 [2024-11-26 00:51:06.327633] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:43.544 [2024-11-26 00:51:06.327648] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71419 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 71419 ']' 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 71419 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71419 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:43.544 killing process with pid 71419 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71419' 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 71419 00:05:43.544 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 71419 00:05:43.802 00:05:43.802 real 0m1.499s 00:05:43.802 user 0m1.628s 00:05:43.802 sys 0m0.416s 00:05:43.802 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.802 ************************************ 00:05:43.802 END TEST exit_on_failed_rpc_init 00:05:43.802 ************************************ 00:05:43.802 00:51:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.802 00:51:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:43.802 ************************************ 00:05:43.802 00:05:43.802 real 0m13.824s 00:05:43.802 user 0m13.028s 00:05:43.802 sys 0m1.418s 00:05:43.802 00:51:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.802 00:51:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.802 END TEST skip_rpc 00:05:43.802 ************************************ 00:05:44.062 00:51:06 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.062 00:51:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.062 00:51:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.062 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.062 
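exit_on_failed_rpc_init above rests on two helpers: waitforlisten blocks until the first spdk_tgt answers on /var/tmp/spdk.sock, and the second instance then dies with 'socket in use', which NOT converts into a pass. The trace exposes only waitforlisten's bookkeeping (rpc_addr, max_retries=100, the final (( i == 0 )) check), so the probe below is an assumption about how such a loop can detect readiness, not the helper's actual mechanism:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        # assumed probe: any answered RPC means the server is listening
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}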
************************************ 00:05:44.062 START TEST rpc_client 00:05:44.062 ************************************ 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:44.062 * Looking for test storage... 00:05:44.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.062 00:51:06 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.062 --rc genhtml_branch_coverage=1 00:05:44.062 --rc genhtml_function_coverage=1 00:05:44.062 --rc genhtml_legend=1 00:05:44.062 --rc geninfo_all_blocks=1 00:05:44.062 --rc geninfo_unexecuted_blocks=1 00:05:44.062 00:05:44.062 ' 00:05:44.062 00:51:06 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.062 --rc genhtml_branch_coverage=1 00:05:44.062 --rc genhtml_function_coverage=1 00:05:44.062 --rc genhtml_legend=1 00:05:44.063 --rc geninfo_all_blocks=1 00:05:44.063 --rc geninfo_unexecuted_blocks=1 00:05:44.063 00:05:44.063 ' 00:05:44.063 00:51:06 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.063 --rc genhtml_branch_coverage=1 00:05:44.063 --rc genhtml_function_coverage=1 00:05:44.063 --rc genhtml_legend=1 00:05:44.063 --rc geninfo_all_blocks=1 00:05:44.063 --rc geninfo_unexecuted_blocks=1 00:05:44.063 00:05:44.063 ' 00:05:44.063 00:51:06 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.063 --rc genhtml_branch_coverage=1 00:05:44.063 --rc genhtml_function_coverage=1 00:05:44.063 --rc genhtml_legend=1 00:05:44.063 --rc geninfo_all_blocks=1 00:05:44.063 --rc geninfo_unexecuted_blocks=1 00:05:44.063 00:05:44.063 ' 00:05:44.063 00:51:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:44.063 OK 00:05:44.063 00:51:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.063 00:05:44.063 real 0m0.189s 00:05:44.063 user 0m0.119s 00:05:44.063 sys 0m0.080s 00:05:44.063 00:51:06 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.063 ************************************ 00:05:44.063 END TEST rpc_client 00:05:44.063 ************************************ 00:05:44.063 00:51:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:44.323 00:51:06 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:44.323 00:51:06 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.323 00:51:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.323 00:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:44.323 ************************************ 00:05:44.323 START TEST json_config 00:05:44.323 ************************************ 00:05:44.323 00:51:06 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:44.323 00:51:07 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.323 00:51:07 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.323 00:51:07 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.323 00:51:07 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.323 00:51:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.323 00:51:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.323 00:51:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.323 00:51:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.323 00:51:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.323 00:51:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.323 00:51:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.323 00:51:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.323 00:51:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.323 00:51:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.323 00:51:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.323 00:51:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:44.323 00:51:07 json_config -- scripts/common.sh@345 -- # : 1 00:05:44.323 00:51:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.323 00:51:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:44.323 00:51:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:44.324 00:51:07 json_config -- scripts/common.sh@353 -- # local d=1 00:05:44.324 00:51:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.324 00:51:07 json_config -- scripts/common.sh@355 -- # echo 1 00:05:44.324 00:51:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.324 00:51:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:44.324 00:51:07 json_config -- scripts/common.sh@353 -- # local d=2 00:05:44.324 00:51:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.324 00:51:07 json_config -- scripts/common.sh@355 -- # echo 2 00:05:44.324 00:51:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.324 00:51:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.324 00:51:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.324 00:51:07 json_config -- scripts/common.sh@368 -- # return 0 00:05:44.324 00:51:07 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.324 00:51:07 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.324 --rc genhtml_branch_coverage=1 00:05:44.324 --rc genhtml_function_coverage=1 00:05:44.324 --rc genhtml_legend=1 00:05:44.324 --rc geninfo_all_blocks=1 00:05:44.324 --rc geninfo_unexecuted_blocks=1 00:05:44.324 00:05:44.324 ' 00:05:44.324 00:51:07 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.324 --rc genhtml_branch_coverage=1 00:05:44.324 --rc genhtml_function_coverage=1 00:05:44.324 --rc genhtml_legend=1 00:05:44.324 --rc geninfo_all_blocks=1 00:05:44.324 --rc geninfo_unexecuted_blocks=1 00:05:44.324 00:05:44.324 ' 00:05:44.324 00:51:07 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.324 --rc genhtml_branch_coverage=1 00:05:44.325 --rc genhtml_function_coverage=1 00:05:44.325 --rc genhtml_legend=1 00:05:44.325 --rc geninfo_all_blocks=1 00:05:44.325 --rc geninfo_unexecuted_blocks=1 00:05:44.325 00:05:44.325 ' 00:05:44.325 00:51:07 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.325 --rc genhtml_branch_coverage=1 00:05:44.325 --rc genhtml_function_coverage=1 00:05:44.325 --rc genhtml_legend=1 00:05:44.325 --rc geninfo_all_blocks=1 00:05:44.325 --rc geninfo_unexecuted_blocks=1 00:05:44.325 00:05:44.325 ' 00:05:44.325 00:51:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.325 00:51:07 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2791c0d3-aeea-4ee5-88bc-d866b798a508 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2791c0d3-aeea-4ee5-88bc-d866b798a508 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.325 00:51:07 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:44.325 00:51:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.326 00:51:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.326 00:51:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.326 00:51:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.326 00:51:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.326 00:51:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.326 00:51:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.326 00:51:07 json_config -- paths/export.sh@5 -- # export PATH 00:05:44.326 00:51:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@51 -- # : 0 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.326 00:51:07 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.326 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.326 00:51:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.326 00:51:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:44.326 00:51:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:44.327 00:51:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:44.327 00:51:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:44.327 00:51:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.327 WARNING: No tests are enabled so not running JSON configuration tests 00:05:44.327 00:51:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:44.327 00:51:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:44.327 00:05:44.327 real 0m0.142s 00:05:44.327 user 0m0.096s 00:05:44.327 sys 0m0.048s 00:05:44.327 00:51:07 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.327 00:51:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.327 ************************************ 00:05:44.327 END TEST json_config 00:05:44.327 ************************************ 00:05:44.327 00:51:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:44.327 00:51:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.327 00:51:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.327 00:51:07 -- common/autotest_common.sh@10 -- # set +x 00:05:44.327 ************************************ 00:05:44.327 START TEST json_config_extra_key 00:05:44.327 ************************************ 00:05:44.327 00:51:07 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:44.327 00:51:07 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:44.327 00:51:07 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:44.327 00:51:07 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:44.596 00:51:07 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.596 00:51:07 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:44.596 00:51:07 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.596 00:51:07 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:44.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.596 --rc genhtml_branch_coverage=1 00:05:44.596 --rc genhtml_function_coverage=1 00:05:44.596 --rc genhtml_legend=1 00:05:44.596 --rc geninfo_all_blocks=1 00:05:44.596 --rc geninfo_unexecuted_blocks=1 00:05:44.596 00:05:44.596 ' 00:05:44.596 00:51:07 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:44.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.596 --rc genhtml_branch_coverage=1 00:05:44.596 --rc genhtml_function_coverage=1 00:05:44.596 --rc genhtml_legend=1 00:05:44.596 --rc geninfo_all_blocks=1 00:05:44.596 --rc geninfo_unexecuted_blocks=1 00:05:44.596 00:05:44.596 ' 00:05:44.596 00:51:07 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:44.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.596 --rc genhtml_branch_coverage=1 00:05:44.596 --rc genhtml_function_coverage=1 00:05:44.596 --rc genhtml_legend=1 00:05:44.596 --rc geninfo_all_blocks=1 00:05:44.596 --rc geninfo_unexecuted_blocks=1 00:05:44.596 00:05:44.596 ' 00:05:44.596 00:51:07 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:44.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.596 --rc genhtml_branch_coverage=1 00:05:44.596 --rc 
genhtml_function_coverage=1 00:05:44.596 --rc genhtml_legend=1 00:05:44.596 --rc geninfo_all_blocks=1 00:05:44.596 --rc geninfo_unexecuted_blocks=1 00:05:44.596 00:05:44.596 ' 00:05:44.596 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2791c0d3-aeea-4ee5-88bc-d866b798a508 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2791c0d3-aeea-4ee5-88bc-d866b798a508 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.596 00:51:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.596 00:51:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.596 00:51:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.597 00:51:07 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.597 00:51:07 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.597 00:51:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:44.597 00:51:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:44.597 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:44.597 00:51:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.597 INFO: launching applications... 00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
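Every test section in this log repeats the same coverage gate before exporting LCOV_OPTS: lt 1.15 2 asks whether the installed lcov (1.15 here) predates major version 2, and only then enables the branch/function coverage flags. cmp_versions splits both version strings on '.', '-' and ':' and compares them field by field, exactly as the xtrace shows; a condensed, runnable sketch (treating missing fields as 0 is an assumption):

cmp_versions() {
    local ver1 ver2 op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]    # versions equal: only '=', '<=', '>=' succeed (assumed op set)
}

lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> success, so the coverage flags go on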
00:05:44.597 00:51:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71614 00:05:44.597 Waiting for target to run... 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71614 /var/tmp/spdk_tgt.sock 00:05:44.597 00:51:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 71614 ']' 00:05:44.597 00:51:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.597 00:51:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.597 00:51:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.597 00:51:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:44.597 00:51:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.597 00:51:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:44.597 [2024-11-26 00:51:07.406899] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:44.597 [2024-11-26 00:51:07.407016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71614 ] 00:05:44.856 [2024-11-26 00:51:07.710568] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:44.856 [2024-11-26 00:51:07.736562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.856 [2024-11-26 00:51:07.750416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.423 00:51:08 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.423 00:05:45.423 00:51:08 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:45.423 00:51:08 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:45.423 INFO: shutting down applications... 
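json_config_extra_key drives spdk_tgt entirely from a JSON file on a non-default RPC socket (-r /var/tmp/spdk_tgt.sock), which keeps it from colliding with tests that own /var/tmp/spdk.sock. A minimal sketch of that start-up and of the SIGINT/poll teardown that follows below, reusing the waitforlisten idea sketched earlier; all paths and flags are taken from the trace:

SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
app_socket=/var/tmp/spdk_tgt.sock
config=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

"$SPDK_BIN_DIR/spdk_tgt" -m 0x1 -s 1024 -r "$app_socket" --json "$config" &
app_pid=$!
echo 'Waiting for target to run...'
waitforlisten "$app_pid" "$app_socket"     # blocks until the RPC socket answers

kill -SIGINT "$app_pid"                    # ask the target to shut down cleanly
for (( i = 0; i < 30; i++ )); do           # poll up to 30 times, as the trace does
    kill -0 "$app_pid" 2>/dev/null || break
    sleep 0.5
done
echo 'SPDK target shutdown done'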
00:05:45.423 00:51:08 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71614 ]] 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71614 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71614 00:05:45.423 00:51:08 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.994 00:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.994 00:51:08 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.994 00:51:08 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71614 00:05:45.994 00:51:08 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:45.994 SPDK target shutdown done 00:05:45.994 Success 00:05:45.994 00:51:08 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:45.994 00:51:08 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:45.994 00:51:08 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:45.994 00:51:08 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:45.994 00:05:45.994 real 0m1.561s 00:05:45.994 user 0m1.212s 00:05:45.994 sys 0m0.360s 00:05:45.994 ************************************ 00:05:45.994 END TEST json_config_extra_key 00:05:45.994 ************************************ 00:05:45.994 00:51:08 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.994 00:51:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.994 00:51:08 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.994 00:51:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.994 00:51:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.994 00:51:08 -- common/autotest_common.sh@10 -- # set +x 00:05:45.994 ************************************ 00:05:45.994 START TEST alias_rpc 00:05:45.994 ************************************ 00:05:45.994 00:51:08 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:45.994 * Looking for test storage... 
00:05:45.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:45.994 00:51:08 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.994 00:51:08 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.994 00:51:08 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:46.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.256 00:51:08 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.256 --rc genhtml_branch_coverage=1 00:05:46.256 --rc genhtml_function_coverage=1 00:05:46.256 --rc genhtml_legend=1 00:05:46.256 --rc geninfo_all_blocks=1 00:05:46.256 --rc geninfo_unexecuted_blocks=1 00:05:46.256 00:05:46.256 ' 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.256 --rc genhtml_branch_coverage=1 00:05:46.256 --rc genhtml_function_coverage=1 00:05:46.256 --rc genhtml_legend=1 00:05:46.256 --rc geninfo_all_blocks=1 00:05:46.256 --rc geninfo_unexecuted_blocks=1 00:05:46.256 00:05:46.256 ' 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.256 --rc genhtml_branch_coverage=1 00:05:46.256 --rc genhtml_function_coverage=1 00:05:46.256 --rc genhtml_legend=1 00:05:46.256 --rc geninfo_all_blocks=1 00:05:46.256 --rc geninfo_unexecuted_blocks=1 00:05:46.256 00:05:46.256 ' 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.256 --rc genhtml_branch_coverage=1 00:05:46.256 --rc genhtml_function_coverage=1 00:05:46.256 --rc genhtml_legend=1 00:05:46.256 --rc geninfo_all_blocks=1 00:05:46.256 --rc geninfo_unexecuted_blocks=1 00:05:46.256 00:05:46.256 ' 00:05:46.256 00:51:08 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.256 00:51:08 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71687 00:05:46.256 00:51:08 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71687 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 71687 ']' 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.256 00:51:08 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.256 00:51:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.256 [2024-11-26 00:51:09.033968] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:05:46.256 [2024-11-26 00:51:09.034127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71687 ] 00:05:46.517 [2024-11-26 00:51:09.173417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:46.517 [2024-11-26 00:51:09.204859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.517 [2024-11-26 00:51:09.235357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.088 00:51:09 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.088 00:51:09 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:47.088 00:51:09 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:47.349 00:51:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71687 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 71687 ']' 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 71687 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71687 00:05:47.349 killing process with pid 71687 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71687' 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@973 -- # kill 71687 00:05:47.349 00:51:10 alias_rpc -- common/autotest_common.sh@978 -- # wait 71687 00:05:47.609 ************************************ 00:05:47.609 END TEST alias_rpc 00:05:47.609 ************************************ 00:05:47.609 00:05:47.609 real 0m1.689s 00:05:47.609 user 0m1.773s 00:05:47.609 sys 0m0.464s 00:05:47.609 00:51:10 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.609 00:51:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.870 00:51:10 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:47.870 00:51:10 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:47.870 00:51:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.870 00:51:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.870 00:51:10 -- common/autotest_common.sh@10 -- # set +x 00:05:47.870 ************************************ 00:05:47.870 START TEST spdkcli_tcp 00:05:47.870 ************************************ 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:47.870 * Looking for test storage... 
00:05:47.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.870 00:51:10 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.870 --rc genhtml_branch_coverage=1 00:05:47.870 --rc genhtml_function_coverage=1 00:05:47.870 --rc genhtml_legend=1 00:05:47.870 --rc geninfo_all_blocks=1 00:05:47.870 --rc geninfo_unexecuted_blocks=1 00:05:47.870 00:05:47.870 ' 00:05:47.870 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.870 --rc genhtml_branch_coverage=1 00:05:47.870 --rc genhtml_function_coverage=1 00:05:47.870 --rc genhtml_legend=1 00:05:47.870 --rc geninfo_all_blocks=1 00:05:47.870 --rc geninfo_unexecuted_blocks=1 00:05:47.870 
00:05:47.871 ' 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:47.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.871 --rc genhtml_branch_coverage=1 00:05:47.871 --rc genhtml_function_coverage=1 00:05:47.871 --rc genhtml_legend=1 00:05:47.871 --rc geninfo_all_blocks=1 00:05:47.871 --rc geninfo_unexecuted_blocks=1 00:05:47.871 00:05:47.871 ' 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.871 --rc genhtml_branch_coverage=1 00:05:47.871 --rc genhtml_function_coverage=1 00:05:47.871 --rc genhtml_legend=1 00:05:47.871 --rc geninfo_all_blocks=1 00:05:47.871 --rc geninfo_unexecuted_blocks=1 00:05:47.871 00:05:47.871 ' 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71771 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71771 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71771 ']' 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.871 00:51:10 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.871 00:51:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.871 [2024-11-26 00:51:10.773423] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:47.871 [2024-11-26 00:51:10.773540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71771 ] 00:05:48.133 [2024-11-26 00:51:10.907655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
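Note: the spdkcli_tcp suite started above brings up spdk_tgt with reactor mask 0x3 and then makes its UNIX-domain RPC socket reachable over TCP at 127.0.0.1:9998, which is what the socat and rpc.py lines in the following trace do. A minimal reproduction of that bridge, assuming the commands run from the SPDK repo root against a target already listening on /var/tmp/spdk.sock:

    # Bridge TCP port 9998 to the target's RPC UNIX socket (left running in the background)
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

    # Query the RPC method table over TCP; -r retries the connection, -t caps the per-call timeout
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The rpc_get_methods reply is the long JSON array of method names printed in the trace below.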
00:05:48.133 [2024-11-26 00:51:10.937232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.133 [2024-11-26 00:51:10.966653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.133 [2024-11-26 00:51:10.966698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.706 00:51:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.706 00:51:11 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:48.706 00:51:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71784 00:05:48.706 00:51:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:48.706 00:51:11 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:48.968 [ 00:05:48.968 "bdev_malloc_delete", 00:05:48.968 "bdev_malloc_create", 00:05:48.968 "bdev_null_resize", 00:05:48.968 "bdev_null_delete", 00:05:48.968 "bdev_null_create", 00:05:48.968 "bdev_nvme_cuse_unregister", 00:05:48.968 "bdev_nvme_cuse_register", 00:05:48.968 "bdev_opal_new_user", 00:05:48.968 "bdev_opal_set_lock_state", 00:05:48.968 "bdev_opal_delete", 00:05:48.968 "bdev_opal_get_info", 00:05:48.968 "bdev_opal_create", 00:05:48.968 "bdev_nvme_opal_revert", 00:05:48.968 "bdev_nvme_opal_init", 00:05:48.968 "bdev_nvme_send_cmd", 00:05:48.968 "bdev_nvme_set_keys", 00:05:48.968 "bdev_nvme_get_path_iostat", 00:05:48.968 "bdev_nvme_get_mdns_discovery_info", 00:05:48.968 "bdev_nvme_stop_mdns_discovery", 00:05:48.968 "bdev_nvme_start_mdns_discovery", 00:05:48.968 "bdev_nvme_set_multipath_policy", 00:05:48.968 "bdev_nvme_set_preferred_path", 00:05:48.968 "bdev_nvme_get_io_paths", 00:05:48.968 "bdev_nvme_remove_error_injection", 00:05:48.968 "bdev_nvme_add_error_injection", 00:05:48.968 "bdev_nvme_get_discovery_info", 00:05:48.968 "bdev_nvme_stop_discovery", 00:05:48.968 "bdev_nvme_start_discovery", 00:05:48.968 "bdev_nvme_get_controller_health_info", 00:05:48.968 "bdev_nvme_disable_controller", 00:05:48.968 "bdev_nvme_enable_controller", 00:05:48.968 "bdev_nvme_reset_controller", 00:05:48.968 "bdev_nvme_get_transport_statistics", 00:05:48.968 "bdev_nvme_apply_firmware", 00:05:48.968 "bdev_nvme_detach_controller", 00:05:48.968 "bdev_nvme_get_controllers", 00:05:48.968 "bdev_nvme_attach_controller", 00:05:48.968 "bdev_nvme_set_hotplug", 00:05:48.968 "bdev_nvme_set_options", 00:05:48.968 "bdev_passthru_delete", 00:05:48.968 "bdev_passthru_create", 00:05:48.968 "bdev_lvol_set_parent_bdev", 00:05:48.968 "bdev_lvol_set_parent", 00:05:48.968 "bdev_lvol_check_shallow_copy", 00:05:48.968 "bdev_lvol_start_shallow_copy", 00:05:48.968 "bdev_lvol_grow_lvstore", 00:05:48.968 "bdev_lvol_get_lvols", 00:05:48.968 "bdev_lvol_get_lvstores", 00:05:48.968 "bdev_lvol_delete", 00:05:48.968 "bdev_lvol_set_read_only", 00:05:48.968 "bdev_lvol_resize", 00:05:48.968 "bdev_lvol_decouple_parent", 00:05:48.968 "bdev_lvol_inflate", 00:05:48.968 "bdev_lvol_rename", 00:05:48.968 "bdev_lvol_clone_bdev", 00:05:48.968 "bdev_lvol_clone", 00:05:48.968 "bdev_lvol_snapshot", 00:05:48.968 "bdev_lvol_create", 00:05:48.968 "bdev_lvol_delete_lvstore", 00:05:48.968 "bdev_lvol_rename_lvstore", 00:05:48.968 "bdev_lvol_create_lvstore", 00:05:48.968 "bdev_raid_set_options", 00:05:48.968 "bdev_raid_remove_base_bdev", 00:05:48.968 "bdev_raid_add_base_bdev", 00:05:48.968 "bdev_raid_delete", 00:05:48.968 "bdev_raid_create", 00:05:48.968 "bdev_raid_get_bdevs", 00:05:48.968 "bdev_error_inject_error", 00:05:48.968 
"bdev_error_delete", 00:05:48.968 "bdev_error_create", 00:05:48.968 "bdev_split_delete", 00:05:48.968 "bdev_split_create", 00:05:48.968 "bdev_delay_delete", 00:05:48.968 "bdev_delay_create", 00:05:48.968 "bdev_delay_update_latency", 00:05:48.968 "bdev_zone_block_delete", 00:05:48.968 "bdev_zone_block_create", 00:05:48.968 "blobfs_create", 00:05:48.968 "blobfs_detect", 00:05:48.968 "blobfs_set_cache_size", 00:05:48.968 "bdev_xnvme_delete", 00:05:48.968 "bdev_xnvme_create", 00:05:48.968 "bdev_aio_delete", 00:05:48.968 "bdev_aio_rescan", 00:05:48.968 "bdev_aio_create", 00:05:48.968 "bdev_ftl_set_property", 00:05:48.968 "bdev_ftl_get_properties", 00:05:48.968 "bdev_ftl_get_stats", 00:05:48.968 "bdev_ftl_unmap", 00:05:48.969 "bdev_ftl_unload", 00:05:48.969 "bdev_ftl_delete", 00:05:48.969 "bdev_ftl_load", 00:05:48.969 "bdev_ftl_create", 00:05:48.969 "bdev_virtio_attach_controller", 00:05:48.969 "bdev_virtio_scsi_get_devices", 00:05:48.969 "bdev_virtio_detach_controller", 00:05:48.969 "bdev_virtio_blk_set_hotplug", 00:05:48.969 "bdev_iscsi_delete", 00:05:48.969 "bdev_iscsi_create", 00:05:48.969 "bdev_iscsi_set_options", 00:05:48.969 "accel_error_inject_error", 00:05:48.969 "ioat_scan_accel_module", 00:05:48.969 "dsa_scan_accel_module", 00:05:48.969 "iaa_scan_accel_module", 00:05:48.969 "keyring_file_remove_key", 00:05:48.969 "keyring_file_add_key", 00:05:48.969 "keyring_linux_set_options", 00:05:48.969 "fsdev_aio_delete", 00:05:48.969 "fsdev_aio_create", 00:05:48.969 "iscsi_get_histogram", 00:05:48.969 "iscsi_enable_histogram", 00:05:48.969 "iscsi_set_options", 00:05:48.969 "iscsi_get_auth_groups", 00:05:48.969 "iscsi_auth_group_remove_secret", 00:05:48.969 "iscsi_auth_group_add_secret", 00:05:48.969 "iscsi_delete_auth_group", 00:05:48.969 "iscsi_create_auth_group", 00:05:48.969 "iscsi_set_discovery_auth", 00:05:48.969 "iscsi_get_options", 00:05:48.969 "iscsi_target_node_request_logout", 00:05:48.969 "iscsi_target_node_set_redirect", 00:05:48.969 "iscsi_target_node_set_auth", 00:05:48.969 "iscsi_target_node_add_lun", 00:05:48.969 "iscsi_get_stats", 00:05:48.969 "iscsi_get_connections", 00:05:48.969 "iscsi_portal_group_set_auth", 00:05:48.969 "iscsi_start_portal_group", 00:05:48.969 "iscsi_delete_portal_group", 00:05:48.969 "iscsi_create_portal_group", 00:05:48.969 "iscsi_get_portal_groups", 00:05:48.969 "iscsi_delete_target_node", 00:05:48.969 "iscsi_target_node_remove_pg_ig_maps", 00:05:48.969 "iscsi_target_node_add_pg_ig_maps", 00:05:48.969 "iscsi_create_target_node", 00:05:48.969 "iscsi_get_target_nodes", 00:05:48.969 "iscsi_delete_initiator_group", 00:05:48.969 "iscsi_initiator_group_remove_initiators", 00:05:48.969 "iscsi_initiator_group_add_initiators", 00:05:48.969 "iscsi_create_initiator_group", 00:05:48.969 "iscsi_get_initiator_groups", 00:05:48.969 "nvmf_set_crdt", 00:05:48.969 "nvmf_set_config", 00:05:48.969 "nvmf_set_max_subsystems", 00:05:48.969 "nvmf_stop_mdns_prr", 00:05:48.969 "nvmf_publish_mdns_prr", 00:05:48.969 "nvmf_subsystem_get_listeners", 00:05:48.969 "nvmf_subsystem_get_qpairs", 00:05:48.969 "nvmf_subsystem_get_controllers", 00:05:48.969 "nvmf_get_stats", 00:05:48.969 "nvmf_get_transports", 00:05:48.969 "nvmf_create_transport", 00:05:48.969 "nvmf_get_targets", 00:05:48.969 "nvmf_delete_target", 00:05:48.969 "nvmf_create_target", 00:05:48.969 "nvmf_subsystem_allow_any_host", 00:05:48.969 "nvmf_subsystem_set_keys", 00:05:48.969 "nvmf_subsystem_remove_host", 00:05:48.969 "nvmf_subsystem_add_host", 00:05:48.969 "nvmf_ns_remove_host", 00:05:48.969 "nvmf_ns_add_host", 
00:05:48.969 "nvmf_subsystem_remove_ns", 00:05:48.969 "nvmf_subsystem_set_ns_ana_group", 00:05:48.969 "nvmf_subsystem_add_ns", 00:05:48.969 "nvmf_subsystem_listener_set_ana_state", 00:05:48.969 "nvmf_discovery_get_referrals", 00:05:48.969 "nvmf_discovery_remove_referral", 00:05:48.969 "nvmf_discovery_add_referral", 00:05:48.969 "nvmf_subsystem_remove_listener", 00:05:48.969 "nvmf_subsystem_add_listener", 00:05:48.969 "nvmf_delete_subsystem", 00:05:48.969 "nvmf_create_subsystem", 00:05:48.969 "nvmf_get_subsystems", 00:05:48.969 "env_dpdk_get_mem_stats", 00:05:48.969 "nbd_get_disks", 00:05:48.969 "nbd_stop_disk", 00:05:48.969 "nbd_start_disk", 00:05:48.969 "ublk_recover_disk", 00:05:48.969 "ublk_get_disks", 00:05:48.969 "ublk_stop_disk", 00:05:48.969 "ublk_start_disk", 00:05:48.969 "ublk_destroy_target", 00:05:48.969 "ublk_create_target", 00:05:48.969 "virtio_blk_create_transport", 00:05:48.969 "virtio_blk_get_transports", 00:05:48.969 "vhost_controller_set_coalescing", 00:05:48.969 "vhost_get_controllers", 00:05:48.969 "vhost_delete_controller", 00:05:48.969 "vhost_create_blk_controller", 00:05:48.969 "vhost_scsi_controller_remove_target", 00:05:48.969 "vhost_scsi_controller_add_target", 00:05:48.969 "vhost_start_scsi_controller", 00:05:48.969 "vhost_create_scsi_controller", 00:05:48.969 "thread_set_cpumask", 00:05:48.969 "scheduler_set_options", 00:05:48.969 "framework_get_governor", 00:05:48.969 "framework_get_scheduler", 00:05:48.969 "framework_set_scheduler", 00:05:48.969 "framework_get_reactors", 00:05:48.969 "thread_get_io_channels", 00:05:48.969 "thread_get_pollers", 00:05:48.969 "thread_get_stats", 00:05:48.969 "framework_monitor_context_switch", 00:05:48.969 "spdk_kill_instance", 00:05:48.969 "log_enable_timestamps", 00:05:48.969 "log_get_flags", 00:05:48.969 "log_clear_flag", 00:05:48.969 "log_set_flag", 00:05:48.969 "log_get_level", 00:05:48.969 "log_set_level", 00:05:48.969 "log_get_print_level", 00:05:48.969 "log_set_print_level", 00:05:48.969 "framework_enable_cpumask_locks", 00:05:48.969 "framework_disable_cpumask_locks", 00:05:48.969 "framework_wait_init", 00:05:48.969 "framework_start_init", 00:05:48.969 "scsi_get_devices", 00:05:48.969 "bdev_get_histogram", 00:05:48.969 "bdev_enable_histogram", 00:05:48.969 "bdev_set_qos_limit", 00:05:48.969 "bdev_set_qd_sampling_period", 00:05:48.969 "bdev_get_bdevs", 00:05:48.969 "bdev_reset_iostat", 00:05:48.969 "bdev_get_iostat", 00:05:48.969 "bdev_examine", 00:05:48.969 "bdev_wait_for_examine", 00:05:48.969 "bdev_set_options", 00:05:48.969 "accel_get_stats", 00:05:48.969 "accel_set_options", 00:05:48.969 "accel_set_driver", 00:05:48.969 "accel_crypto_key_destroy", 00:05:48.969 "accel_crypto_keys_get", 00:05:48.969 "accel_crypto_key_create", 00:05:48.969 "accel_assign_opc", 00:05:48.969 "accel_get_module_info", 00:05:48.969 "accel_get_opc_assignments", 00:05:48.969 "vmd_rescan", 00:05:48.969 "vmd_remove_device", 00:05:48.969 "vmd_enable", 00:05:48.969 "sock_get_default_impl", 00:05:48.969 "sock_set_default_impl", 00:05:48.969 "sock_impl_set_options", 00:05:48.969 "sock_impl_get_options", 00:05:48.969 "iobuf_get_stats", 00:05:48.969 "iobuf_set_options", 00:05:48.969 "keyring_get_keys", 00:05:48.969 "framework_get_pci_devices", 00:05:48.969 "framework_get_config", 00:05:48.969 "framework_get_subsystems", 00:05:48.969 "fsdev_set_opts", 00:05:48.969 "fsdev_get_opts", 00:05:48.969 "trace_get_info", 00:05:48.969 "trace_get_tpoint_group_mask", 00:05:48.969 "trace_disable_tpoint_group", 00:05:48.969 "trace_enable_tpoint_group", 00:05:48.969 
"trace_clear_tpoint_mask", 00:05:48.969 "trace_set_tpoint_mask", 00:05:48.969 "notify_get_notifications", 00:05:48.969 "notify_get_types", 00:05:48.969 "spdk_get_version", 00:05:48.969 "rpc_get_methods" 00:05:48.969 ] 00:05:48.969 00:51:11 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:48.969 00:51:11 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.969 00:51:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.245 00:51:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:49.245 00:51:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71771 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71771 ']' 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71771 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71771 00:05:49.245 killing process with pid 71771 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71771' 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71771 00:05:49.245 00:51:11 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71771 00:05:49.537 ************************************ 00:05:49.537 END TEST spdkcli_tcp 00:05:49.537 ************************************ 00:05:49.537 00:05:49.537 real 0m1.673s 00:05:49.537 user 0m2.911s 00:05:49.537 sys 0m0.494s 00:05:49.537 00:51:12 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.537 00:51:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.537 00:51:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.537 00:51:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.537 00:51:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.537 00:51:12 -- common/autotest_common.sh@10 -- # set +x 00:05:49.537 ************************************ 00:05:49.537 START TEST dpdk_mem_utility 00:05:49.537 ************************************ 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.537 * Looking for test storage... 
00:05:49.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:49.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.537 00:51:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.537 --rc genhtml_branch_coverage=1 00:05:49.537 --rc genhtml_function_coverage=1 00:05:49.537 --rc genhtml_legend=1 00:05:49.537 --rc geninfo_all_blocks=1 00:05:49.537 --rc geninfo_unexecuted_blocks=1 00:05:49.537 00:05:49.537 ' 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.537 --rc genhtml_branch_coverage=1 00:05:49.537 --rc genhtml_function_coverage=1 00:05:49.537 --rc genhtml_legend=1 00:05:49.537 --rc geninfo_all_blocks=1 00:05:49.537 --rc geninfo_unexecuted_blocks=1 00:05:49.537 00:05:49.537 ' 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.537 --rc genhtml_branch_coverage=1 00:05:49.537 --rc genhtml_function_coverage=1 00:05:49.537 --rc genhtml_legend=1 00:05:49.537 --rc geninfo_all_blocks=1 00:05:49.537 --rc geninfo_unexecuted_blocks=1 00:05:49.537 00:05:49.537 ' 00:05:49.537 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.537 --rc genhtml_branch_coverage=1 00:05:49.537 --rc genhtml_function_coverage=1 00:05:49.537 --rc genhtml_legend=1 00:05:49.537 --rc geninfo_all_blocks=1 00:05:49.537 --rc geninfo_unexecuted_blocks=1 00:05:49.537 00:05:49.537 ' 00:05:49.538 00:51:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.538 00:51:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71867 00:05:49.538 00:51:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71867 00:05:49.538 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71867 ']' 00:05:49.538 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.538 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.538 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.538 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.538 00:51:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.538 00:51:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.799 [2024-11-26 00:51:12.516954] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:05:49.799 [2024-11-26 00:51:12.517106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71867 ] 00:05:49.799 [2024-11-26 00:51:12.652561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:49.799 [2024-11-26 00:51:12.678462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.799 [2024-11-26 00:51:12.707268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.743 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.743 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:50.743 00:51:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.743 00:51:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.743 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.743 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.743 { 00:05:50.743 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.743 } 00:05:50.743 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.743 00:51:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.743 DPDK memory size 810.000000 MiB in 1 heap(s) 00:05:50.743 1 heaps totaling size 810.000000 MiB 00:05:50.743 size: 810.000000 MiB heap id: 0 00:05:50.743 end heaps---------- 00:05:50.743 9 mempools totaling size 595.772034 MiB 00:05:50.743 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.743 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.743 size: 92.545471 MiB name: bdev_io_71867 00:05:50.743 size: 50.003479 MiB name: msgpool_71867 00:05:50.743 size: 36.509338 MiB name: fsdev_io_71867 00:05:50.743 size: 21.763794 MiB name: PDU_Pool 00:05:50.743 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.743 size: 4.133484 MiB name: evtpool_71867 00:05:50.743 size: 0.026123 MiB name: Session_Pool 00:05:50.743 end mempools------- 00:05:50.743 6 memzones totaling size 4.142822 MiB 00:05:50.743 size: 1.000366 MiB name: RG_ring_0_71867 00:05:50.743 size: 1.000366 MiB name: RG_ring_1_71867 00:05:50.743 size: 1.000366 MiB name: RG_ring_4_71867 00:05:50.743 size: 1.000366 MiB name: RG_ring_5_71867 00:05:50.743 size: 0.125366 MiB name: RG_ring_2_71867 00:05:50.743 size: 0.015991 MiB name: RG_ring_3_71867 00:05:50.743 end memzones------- 00:05:50.743 00:51:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.743 heap id: 0 total size: 810.000000 MiB number of busy elements: 310 number of free elements: 15 00:05:50.743 list of free elements. 
size: 10.954346 MiB 00:05:50.744 element at address: 0x200018a00000 with size: 0.999878 MiB 00:05:50.744 element at address: 0x200018c00000 with size: 0.999878 MiB 00:05:50.744 element at address: 0x200031800000 with size: 0.994446 MiB 00:05:50.744 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:50.744 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:50.744 element at address: 0x200012c00000 with size: 0.954285 MiB 00:05:50.744 element at address: 0x200018e00000 with size: 0.936584 MiB 00:05:50.744 element at address: 0x200000200000 with size: 0.858093 MiB 00:05:50.744 element at address: 0x20001a600000 with size: 0.568054 MiB 00:05:50.744 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:50.744 element at address: 0x200000c00000 with size: 0.487000 MiB 00:05:50.744 element at address: 0x200019000000 with size: 0.485657 MiB 00:05:50.744 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:50.744 element at address: 0x200027a00000 with size: 0.395752 MiB 00:05:50.744 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:50.744 list of standard malloc elements. size: 199.126770 MiB 00:05:50.744 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:50.744 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:50.744 element at address: 0x200018afff80 with size: 1.000122 MiB 00:05:50.744 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:05:50.744 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:50.744 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:05:50.744 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:50.744 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:05:50.744 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 
00:05:50.744 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:50.744 element at 
address: 0x200000c7d780 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:50.744 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d580 
with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:50.744 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691780 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691840 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691900 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692080 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692140 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692200 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692380 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692440 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692500 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692680 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692740 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692800 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692980 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693040 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693100 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6931c0 with size: 0.000183 MiB 
00:05:50.745 element at address: 0x20001a693280 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693340 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693400 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693580 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693640 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693700 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693880 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693940 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694000 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694180 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694240 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694300 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694480 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694540 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694600 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694780 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694840 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694900 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a695080 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a695140 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a695200 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a695380 with size: 0.000183 MiB 00:05:50.745 element at address: 0x20001a695440 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a65500 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:05:50.745 element at 
address: 0x200027a6c3c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e880 
with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:05:50.745 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:05:50.746 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:05:50.746 list of memzone associated elements. 
size: 599.918884 MiB 00:05:50.746 element at address: 0x20001a695500 with size: 211.416748 MiB 00:05:50.746 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.746 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:05:50.746 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.746 element at address: 0x200012df4780 with size: 92.045044 MiB 00:05:50.746 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71867_0 00:05:50.746 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:50.746 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71867_0 00:05:50.746 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:50.746 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71867_0 00:05:50.746 element at address: 0x2000191be940 with size: 20.255554 MiB 00:05:50.746 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.746 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:05:50.746 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.746 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:50.746 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71867_0 00:05:50.746 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:50.746 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71867 00:05:50.746 element at address: 0x2000002fbd80 with size: 1.008118 MiB 00:05:50.746 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71867 00:05:50.746 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:50.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.746 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:05:50.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.746 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:50.746 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.746 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:50.746 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.746 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:50.746 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71867 00:05:50.746 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:50.746 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71867 00:05:50.746 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:05:50.746 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71867 00:05:50.746 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:05:50.746 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71867 00:05:50.746 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:50.746 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71867 00:05:50.746 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:50.746 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71867 00:05:50.746 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:50.746 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.746 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:50.746 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.746 element at address: 0x20001907c540 with size: 0.250488 MiB 00:05:50.746 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.746 element at address: 0x2000002dbac0 with size: 0.125488 MiB 00:05:50.746 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71867 00:05:50.746 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:50.746 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71867 00:05:50.746 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:50.746 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.746 element at address: 0x200027a65680 with size: 0.023743 MiB 00:05:50.746 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.746 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:50.746 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71867 00:05:50.746 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:05:50.746 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.746 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:50.746 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71867 00:05:50.746 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:50.746 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71867 00:05:50.746 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:50.746 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71867 00:05:50.746 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:05:50.746 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.746 00:51:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.746 00:51:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71867 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71867 ']' 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71867 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71867 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71867' 00:05:50.746 killing process with pid 71867 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71867 00:05:50.746 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71867 00:05:51.006 00:05:51.006 real 0m1.592s 00:05:51.006 user 0m1.561s 00:05:51.006 sys 0m0.472s 00:05:51.006 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.006 ************************************ 00:05:51.006 END TEST dpdk_mem_utility 00:05:51.006 ************************************ 00:05:51.006 00:51:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.006 00:51:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.006 00:51:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.006 00:51:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.006 00:51:13 -- common/autotest_common.sh@10 -- # set +x 
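Note: the dpdk_mem_utility suite that just finished exercises SPDK's DPDK-memory introspection path: the env_dpdk_get_mem_stats RPC makes the target write its allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump as the heap, mempool and memzone report shown above. A sketch of the same flow against a running target, assuming the default RPC socket and the SPDK repo root as working directory:

    # Ask the target to dump its DPDK memory stats; prints the dump file name
    scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize heaps, mempools and memzones from the dump
    scripts/dpdk_mem_info.py

    # Same invocation as the test above; -m 0 produced the element-level listing for heap id 0
    scripts/dpdk_mem_info.py -m 0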
00:05:51.267 ************************************ 00:05:51.267 START TEST event 00:05:51.267 ************************************ 00:05:51.267 00:51:13 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.267 * Looking for test storage... 00:05:51.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.267 00:51:14 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.267 00:51:14 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.267 00:51:14 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.267 00:51:14 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.267 00:51:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.267 00:51:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.267 00:51:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.267 00:51:14 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.267 00:51:14 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.267 00:51:14 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.267 00:51:14 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.267 00:51:14 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.267 00:51:14 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.267 00:51:14 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.268 00:51:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.268 00:51:14 event -- scripts/common.sh@344 -- # case "$op" in 00:05:51.268 00:51:14 event -- scripts/common.sh@345 -- # : 1 00:05:51.268 00:51:14 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.268 00:51:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.268 00:51:14 event -- scripts/common.sh@365 -- # decimal 1 00:05:51.268 00:51:14 event -- scripts/common.sh@353 -- # local d=1 00:05:51.268 00:51:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.268 00:51:14 event -- scripts/common.sh@355 -- # echo 1 00:05:51.268 00:51:14 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.268 00:51:14 event -- scripts/common.sh@366 -- # decimal 2 00:05:51.268 00:51:14 event -- scripts/common.sh@353 -- # local d=2 00:05:51.268 00:51:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.268 00:51:14 event -- scripts/common.sh@355 -- # echo 2 00:05:51.268 00:51:14 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.268 00:51:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.268 00:51:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.268 00:51:14 event -- scripts/common.sh@368 -- # return 0 00:05:51.268 00:51:14 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.268 00:51:14 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.268 --rc genhtml_branch_coverage=1 00:05:51.268 --rc genhtml_function_coverage=1 00:05:51.268 --rc genhtml_legend=1 00:05:51.268 --rc geninfo_all_blocks=1 00:05:51.268 --rc geninfo_unexecuted_blocks=1 00:05:51.268 00:05:51.268 ' 00:05:51.268 00:51:14 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.268 --rc genhtml_branch_coverage=1 00:05:51.268 --rc genhtml_function_coverage=1 00:05:51.268 --rc genhtml_legend=1 00:05:51.268 --rc 
geninfo_all_blocks=1 00:05:51.268 --rc geninfo_unexecuted_blocks=1 00:05:51.268 00:05:51.268 ' 00:05:51.268 00:51:14 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.268 --rc genhtml_branch_coverage=1 00:05:51.268 --rc genhtml_function_coverage=1 00:05:51.268 --rc genhtml_legend=1 00:05:51.268 --rc geninfo_all_blocks=1 00:05:51.268 --rc geninfo_unexecuted_blocks=1 00:05:51.268 00:05:51.268 ' 00:05:51.268 00:51:14 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.268 --rc genhtml_branch_coverage=1 00:05:51.268 --rc genhtml_function_coverage=1 00:05:51.268 --rc genhtml_legend=1 00:05:51.268 --rc geninfo_all_blocks=1 00:05:51.268 --rc geninfo_unexecuted_blocks=1 00:05:51.268 00:05:51.268 ' 00:05:51.268 00:51:14 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:51.268 00:51:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.268 00:51:14 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.268 00:51:14 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:51.268 00:51:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.268 00:51:14 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.268 ************************************ 00:05:51.268 START TEST event_perf 00:05:51.268 ************************************ 00:05:51.268 00:51:14 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.268 Running I/O for 1 seconds...[2024-11-26 00:51:14.136320] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:51.268 [2024-11-26 00:51:14.136636] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71947 ] 00:05:51.529 [2024-11-26 00:51:14.272124] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:51.529 [2024-11-26 00:51:14.302000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.529 [2024-11-26 00:51:14.336781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.529 [2024-11-26 00:51:14.337094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.529 [2024-11-26 00:51:14.337372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.529 [2024-11-26 00:51:14.337499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.470 Running I/O for 1 seconds... 00:05:52.470 lcore 0: 138914 00:05:52.470 lcore 1: 138911 00:05:52.470 lcore 2: 138910 00:05:52.470 lcore 3: 138911 00:05:52.732 done. 
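The per-lcore counts above come from the event_perf binary, which starts one reactor per core in the -m mask (0xF = cores 0-3) and tallies how many events each reactor processes during the -t window. A sketch of running it directly with the same flags the harness used, assuming the test binaries are already built under the SPDK tree:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1
  # expected output shape: one "lcore N: <count>" line per reactor, then "done."

The four nearly identical counts above suggest the event load was spread evenly across the reactors.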
00:05:52.732 00:05:52.732 real 0m1.303s 00:05:52.732 user 0m4.071s 00:05:52.732 sys 0m0.105s 00:05:52.732 00:51:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.732 ************************************ 00:05:52.732 END TEST event_perf 00:05:52.732 ************************************ 00:05:52.732 00:51:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.732 00:51:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:52.732 00:51:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:52.732 00:51:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.732 00:51:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.732 ************************************ 00:05:52.732 START TEST event_reactor 00:05:52.732 ************************************ 00:05:52.732 00:51:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:52.733 [2024-11-26 00:51:15.506790] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:52.733 [2024-11-26 00:51:15.507173] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71987 ] 00:05:52.733 [2024-11-26 00:51:15.639041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.992 [2024-11-26 00:51:15.663942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.992 [2024-11-26 00:51:15.692487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.934 test_start 00:05:53.934 oneshot 00:05:53.934 tick 100 00:05:53.934 tick 100 00:05:53.934 tick 250 00:05:53.934 tick 100 00:05:53.934 tick 100 00:05:53.934 tick 100 00:05:53.934 tick 250 00:05:53.934 tick 500 00:05:53.934 tick 100 00:05:53.934 tick 100 00:05:53.934 tick 250 00:05:53.934 tick 100 00:05:53.934 tick 100 00:05:53.934 test_end 00:05:53.934 00:05:53.934 real 0m1.265s 00:05:53.934 user 0m1.076s 00:05:53.934 sys 0m0.082s 00:05:53.934 00:51:16 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.934 ************************************ 00:05:53.934 END TEST event_reactor 00:05:53.934 ************************************ 00:05:53.934 00:51:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:53.934 00:51:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.934 00:51:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:53.934 00:51:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.934 00:51:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.934 ************************************ 00:05:53.934 START TEST event_reactor_perf 00:05:53.934 ************************************ 00:05:53.934 00:51:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.934 [2024-11-26 00:51:16.828430] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:05:53.934 [2024-11-26 00:51:16.828553] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72018 ] 00:05:54.195 [2024-11-26 00:51:16.958151] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:54.195 [2024-11-26 00:51:16.989446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.195 [2024-11-26 00:51:17.008513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.130 test_start 00:05:55.130 test_end 00:05:55.130 Performance: 315560 events per second 00:05:55.389 ************************************ 00:05:55.389 END TEST event_reactor_perf 00:05:55.389 ************************************ 00:05:55.389 00:05:55.389 real 0m1.251s 00:05:55.389 user 0m1.075s 00:05:55.389 sys 0m0.068s 00:05:55.389 00:51:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.389 00:51:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.389 00:51:18 event -- event/event.sh@49 -- # uname -s 00:05:55.389 00:51:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.389 00:51:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.389 00:51:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.389 00:51:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.389 00:51:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.389 ************************************ 00:05:55.389 START TEST event_scheduler 00:05:55.389 ************************************ 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.389 * Looking for test storage... 
00:05:55.389 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.389 00:51:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.389 --rc genhtml_branch_coverage=1 00:05:55.389 --rc genhtml_function_coverage=1 00:05:55.389 --rc genhtml_legend=1 00:05:55.389 --rc geninfo_all_blocks=1 00:05:55.389 --rc geninfo_unexecuted_blocks=1 00:05:55.389 00:05:55.389 ' 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.389 --rc genhtml_branch_coverage=1 00:05:55.389 --rc genhtml_function_coverage=1 00:05:55.389 --rc genhtml_legend=1 00:05:55.389 --rc geninfo_all_blocks=1 00:05:55.389 --rc geninfo_unexecuted_blocks=1 00:05:55.389 00:05:55.389 ' 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.389 --rc genhtml_branch_coverage=1 00:05:55.389 --rc genhtml_function_coverage=1 00:05:55.389 --rc genhtml_legend=1 00:05:55.389 --rc geninfo_all_blocks=1 00:05:55.389 --rc geninfo_unexecuted_blocks=1 00:05:55.389 00:05:55.389 ' 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:55.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.389 --rc genhtml_branch_coverage=1 00:05:55.389 --rc genhtml_function_coverage=1 00:05:55.389 --rc genhtml_legend=1 00:05:55.389 --rc geninfo_all_blocks=1 00:05:55.389 --rc geninfo_unexecuted_blocks=1 00:05:55.389 00:05:55.389 ' 00:05:55.389 00:51:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:55.389 00:51:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=72088 00:05:55.389 00:51:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.389 00:51:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 72088 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 72088 ']' 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:55.389 00:51:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:55.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.389 00:51:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.648 [2024-11-26 00:51:18.307626] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:05:55.648 [2024-11-26 00:51:18.307752] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72088 ] 00:05:55.648 [2024-11-26 00:51:18.439675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:55.648 [2024-11-26 00:51:18.465449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.648 [2024-11-26 00:51:18.487338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.648 [2024-11-26 00:51:18.487620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.648 [2024-11-26 00:51:18.487800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.648 [2024-11-26 00:51:18.487892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.584 00:51:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.584 00:51:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:56.584 00:51:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.584 00:51:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.584 00:51:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.584 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:56.584 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:56.584 POWER: intel_pstate driver is not supported 00:05:56.584 POWER: cppc_cpufreq driver is not supported 00:05:56.584 POWER: amd-pstate driver is not supported 00:05:56.584 POWER: acpi-cpufreq driver is not supported 00:05:56.584 POWER: Unable to set Power Management Environment for lcore 0 00:05:56.584 [2024-11-26 00:51:19.152995] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:56.584 [2024-11-26 00:51:19.153025] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:56.584 [2024-11-26 00:51:19.153037] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:56.584 [2024-11-26 00:51:19.153050] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.584 [2024-11-26 00:51:19.153072] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 
00:05:56.584 [2024-11-26 00:51:19.153080] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.584 00:51:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.584 00:51:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.585 00:51:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 [2024-11-26 00:51:19.208748] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:56.585 00:51:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:56.585 00:51:19 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.585 00:51:19 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 ************************************ 00:05:56.585 START TEST scheduler_create_thread 00:05:56.585 ************************************ 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 2 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 3 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 4 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 5 00:05:56.585 00:51:19 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 6 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 7 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 8 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 9 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 10 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.585 00:51:19 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:56.585 00:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.525 ************************************ 00:05:57.525 END TEST scheduler_create_thread 00:05:57.525 ************************************ 00:05:57.525 00:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.525 00:05:57.525 real 0m1.169s 00:05:57.525 user 0m0.016s 00:05:57.525 sys 0m0.002s 00:05:57.525 00:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.525 00:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.786 00:51:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.786 00:51:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 72088 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 72088 ']' 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 72088 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72088 00:05:57.786 killing process with pid 72088 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72088' 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 72088 00:05:57.786 00:51:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 72088 00:05:58.047 [2024-11-26 00:51:20.874826] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
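The scheduler_create_thread test above drives everything over RPC: it switches the framework to the dynamic scheduler, creates pinned active and idle threads on each core, then creates one thread whose activity it retunes (thread 11, raised to 50) and one that it deletes (thread 12). A hedged sketch of the same call sequence via rpc.py, assuming the scheduler test app is already running with --wait-for-rpc (as scheduler.sh starts it) and that rpc_cmd's --plugin flag maps onto rpc.py's --plugin option:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC framework_set_scheduler dynamic    # the power-governor errors seen above are tolerated
  $RPC framework_start_init
  SRPC="$RPC --plugin scheduler_plugin"
  $SRPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned to core 0, fully active
  $SRPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned to core 0, idle
  TID=$($SRPC scheduler_thread_create -n half_active -a 0)       # unpinned; returns the thread id
  $SRPC scheduler_thread_set_active "$TID" 50                    # raise its activity to 50%
  TID2=$($SRPC scheduler_thread_create -n deleted -a 100)
  $SRPC scheduler_thread_delete "$TID2"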
00:05:58.309 ************************************ 00:05:58.309 END TEST event_scheduler 00:05:58.309 ************************************ 00:05:58.309 00:05:58.309 real 0m2.924s 00:05:58.309 user 0m5.073s 00:05:58.309 sys 0m0.318s 00:05:58.309 00:51:21 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.309 00:51:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.309 00:51:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:58.309 00:51:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:58.309 00:51:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.309 00:51:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.309 00:51:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.309 ************************************ 00:05:58.309 START TEST app_repeat 00:05:58.309 ************************************ 00:05:58.309 00:51:21 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:58.309 Process app_repeat pid: 72172 00:05:58.309 spdk_app_start Round 0 00:05:58.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=72172 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 72172' 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72172 /var/tmp/spdk-nbd.sock 00:05:58.309 00:51:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72172 ']' 00:05:58.309 00:51:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.309 00:51:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.309 00:51:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:58.309 00:51:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:58.309 00:51:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.309 00:51:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:58.309 [2024-11-26 00:51:21.119535] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:05:58.309 [2024-11-26 00:51:21.119764] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72172 ] 00:05:58.570 [2024-11-26 00:51:21.250266] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:58.570 [2024-11-26 00:51:21.280520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:58.570 [2024-11-26 00:51:21.301278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.570 [2024-11-26 00:51:21.301332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.205 00:51:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.205 00:51:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.205 00:51:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.467 Malloc0 00:05:59.467 00:51:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.729 Malloc1 00:05:59.729 00:51:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.729 /dev/nbd0 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.729 00:51:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.729 00:51:22 event.app_repeat -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.729 00:51:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.729 1+0 records in 00:05:59.729 1+0 records out 00:05:59.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270404 s, 15.1 MB/s 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.991 /dev/nbd1 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.991 1+0 records in 00:05:59.991 1+0 records out 00:05:59.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363646 s, 11.3 MB/s 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.991 00:51:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.991 00:51:22 
event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.991 00:51:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.252 { 00:06:00.252 "nbd_device": "/dev/nbd0", 00:06:00.252 "bdev_name": "Malloc0" 00:06:00.252 }, 00:06:00.252 { 00:06:00.252 "nbd_device": "/dev/nbd1", 00:06:00.252 "bdev_name": "Malloc1" 00:06:00.252 } 00:06:00.252 ]' 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.252 { 00:06:00.252 "nbd_device": "/dev/nbd0", 00:06:00.252 "bdev_name": "Malloc0" 00:06:00.252 }, 00:06:00.252 { 00:06:00.252 "nbd_device": "/dev/nbd1", 00:06:00.252 "bdev_name": "Malloc1" 00:06:00.252 } 00:06:00.252 ]' 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.252 /dev/nbd1' 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.252 /dev/nbd1' 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.252 256+0 records in 00:06:00.252 256+0 records out 00:06:00.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00737732 s, 142 MB/s 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.252 256+0 records in 00:06:00.252 256+0 records out 00:06:00.252 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191191 s, 54.8 MB/s 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.252 00:51:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.513 256+0 records in 00:06:00.513 256+0 records out 00:06:00.513 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225345 s, 46.5 MB/s 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.513 
00:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.513 00:51:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.514 00:51:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.775 00:51:23 event.app_repeat -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.775 00:51:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.036 00:51:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.036 00:51:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.296 00:51:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.296 [2024-11-26 00:51:24.155285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.296 [2024-11-26 00:51:24.174272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.296 [2024-11-26 00:51:24.174413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.296 [2024-11-26 00:51:24.205298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.296 [2024-11-26 00:51:24.205514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.577 spdk_app_start Round 1 00:06:04.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.577 00:51:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.577 00:51:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:04.577 00:51:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72172 /var/tmp/spdk-nbd.sock 00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72172 ']' 00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
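Round 0 of app_repeat, traced above, is one full NBD round-trip: create two 64 MB malloc bdevs with a 4096-byte block size, export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each with O_DIRECT, cmp the devices byte-for-byte against the source file, tear the NBD exports down, and SIGTERM the app before the next round begins. Condensed to a single device, with a temp file standing in for the harness's nbdrandtest path, a sketch of the cycle (every RPC and dd invocation below appears verbatim in the log):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # app_repeat listens on this socket
  TMP=$(mktemp)
  $RPC bdev_malloc_create 64 4096                       # prints the new bdev name, e.g. "Malloc0"
  $RPC nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of="$TMP" bs=4096 count=256        # 1 MiB of random data
  dd if="$TMP" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$TMP" /dev/nbd0                         # verify the round-trip
  $RPC nbd_stop_disk /dev/nbd0
  $RPC spdk_kill_instance SIGTERM                       # ends the round; app_repeat then restarts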
00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.577 00:51:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.577 00:51:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.577 Malloc0 00:06:04.835 00:51:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.835 Malloc1 00:06:04.835 00:51:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.835 00:51:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.093 /dev/nbd0 00:06:05.093 00:51:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.093 00:51:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.093 1+0 records in 00:06:05.093 1+0 records out 
00:06:05.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000138631 s, 29.5 MB/s 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.093 00:51:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.093 00:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.093 00:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.093 00:51:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.352 /dev/nbd1 00:06:05.352 00:51:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.352 00:51:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.352 1+0 records in 00:06:05.352 1+0 records out 00:06:05.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274742 s, 14.9 MB/s 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.352 00:51:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.352 00:51:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.352 00:51:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.352 00:51:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.352 00:51:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.352 00:51:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.611 { 00:06:05.611 "nbd_device": "/dev/nbd0", 00:06:05.611 "bdev_name": "Malloc0" 00:06:05.611 }, 00:06:05.611 { 00:06:05.611 "nbd_device": "/dev/nbd1", 00:06:05.611 "bdev_name": "Malloc1" 00:06:05.611 } 
00:06:05.611 ]' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.611 { 00:06:05.611 "nbd_device": "/dev/nbd0", 00:06:05.611 "bdev_name": "Malloc0" 00:06:05.611 }, 00:06:05.611 { 00:06:05.611 "nbd_device": "/dev/nbd1", 00:06:05.611 "bdev_name": "Malloc1" 00:06:05.611 } 00:06:05.611 ]' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.611 /dev/nbd1' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.611 /dev/nbd1' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.611 256+0 records in 00:06:05.611 256+0 records out 00:06:05.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00700627 s, 150 MB/s 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.611 256+0 records in 00:06:05.611 256+0 records out 00:06:05.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147905 s, 70.9 MB/s 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.611 256+0 records in 00:06:05.611 256+0 records out 00:06:05.611 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161651 s, 64.9 MB/s 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.611 00:51:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.611 00:51:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.870 00:51:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.128 00:51:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.387 00:51:29 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.387 00:51:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.387 00:51:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.645 00:51:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.645 [2024-11-26 00:51:29.398641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.645 [2024-11-26 00:51:29.414519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.645 [2024-11-26 00:51:29.414522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.645 [2024-11-26 00:51:29.443432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.645 [2024-11-26 00:51:29.443479] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.013 spdk_app_start Round 2 00:06:10.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.013 00:51:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.013 00:51:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.013 00:51:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 72172 /var/tmp/spdk-nbd.sock 00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72172 ']' 00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
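
The write/verify pass traced above reduces to a small pattern: fill a reference file with random bytes, dd it onto every exported NBD device with direct I/O, then cmp each device back against the file and delete the reference. A standalone sketch of that cycle follows; it assumes the SPDK NBD app is already running with /dev/nbd0 and /dev/nbd1 exported, and the temp-file location is illustrative rather than the test's exact path.

#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write+verify cycle seen in the trace.
# Assumption: /dev/nbd0 and /dev/nbd1 are already exported by spdk-nbd.
set -euo pipefail

tmp_file=$(mktemp /tmp/nbdrandtest.XXXXXX)
nbd_list=(/dev/nbd0 /dev/nbd1)

# write phase: 1 MiB of random data, copied to each device with O_DIRECT
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: each device must match the reference file byte-for-byte
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev"
done
rm -f "$tmp_file"
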
00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.013 00:51:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.013 00:51:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.013 Malloc0 00:06:10.013 00:51:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.270 Malloc1 00:06:10.270 00:51:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.270 00:51:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.271 00:51:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.271 00:51:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.271 00:51:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.271 00:51:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.271 00:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.271 00:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.271 00:51:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.271 /dev/nbd0 00:06:10.271 00:51:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.271 00:51:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.271 1+0 records in 00:06:10.271 1+0 records out 
00:06:10.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269181 s, 15.2 MB/s 00:06:10.271 00:51:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.528 00:51:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.528 00:51:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.528 00:51:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.528 00:51:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.528 00:51:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.528 00:51:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.528 00:51:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.528 /dev/nbd1 00:06:10.528 00:51:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.528 00:51:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.528 00:51:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:10.528 00:51:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:10.528 00:51:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.529 1+0 records in 00:06:10.529 1+0 records out 00:06:10.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020093 s, 20.4 MB/s 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:10.529 00:51:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:10.529 00:51:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.529 00:51:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.529 00:51:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.529 00:51:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.529 00:51:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:10.787 { 00:06:10.787 "nbd_device": "/dev/nbd0", 00:06:10.787 "bdev_name": "Malloc0" 00:06:10.787 }, 00:06:10.787 { 00:06:10.787 "nbd_device": "/dev/nbd1", 00:06:10.787 "bdev_name": "Malloc1" 00:06:10.787 } 
00:06:10.787 ]' 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.787 { 00:06:10.787 "nbd_device": "/dev/nbd0", 00:06:10.787 "bdev_name": "Malloc0" 00:06:10.787 }, 00:06:10.787 { 00:06:10.787 "nbd_device": "/dev/nbd1", 00:06:10.787 "bdev_name": "Malloc1" 00:06:10.787 } 00:06:10.787 ]' 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.787 /dev/nbd1' 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.787 /dev/nbd1' 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.787 256+0 records in 00:06:10.787 256+0 records out 00:06:10.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0093664 s, 112 MB/s 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.787 256+0 records in 00:06:10.787 256+0 records out 00:06:10.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011849 s, 88.5 MB/s 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.787 00:51:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.045 256+0 records in 00:06:11.045 256+0 records out 00:06:11.045 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150845 s, 69.5 MB/s 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.045 00:51:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.303 00:51:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.561 00:51:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.561 00:51:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.819 00:51:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.819 [2024-11-26 00:51:34.655705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.819 [2024-11-26 00:51:34.670765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.819 [2024-11-26 00:51:34.670771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.819 [2024-11-26 00:51:34.699154] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.819 [2024-11-26 00:51:34.699199] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.100 00:51:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 72172 /var/tmp/spdk-nbd.sock 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 72172 ']' 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
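
Between rounds the harness tears the app down through its own RPC socket rather than signalling it from outside: spdk_kill_instance SIGTERM asks the reactor loop to exit cleanly, sleep 3 gives it time, and the next round blocks until the UNIX socket answers again. A condensed sketch of that restart wait, using the rpc.py path and socket from the trace; the liveness probe via rpc_get_methods and the 0.5 s poll interval are illustrative choices, while the bound of 100 mirrors max_retries above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

"$rpc" -s "$sock" spdk_kill_instance SIGTERM   # ask the app to shut itself down
sleep 3

# poll until the relaunched app accepts RPCs on the socket again
for ((i = 0; i < 100; i++)); do
  if [[ -S "$sock" ]] && "$rpc" -s "$sock" rpc_get_methods &>/dev/null; then
    break
  fi
  sleep 0.5
done
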
00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:15.100 00:51:37 event.app_repeat -- event/event.sh@39 -- # killprocess 72172 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 72172 ']' 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 72172 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72172 00:06:15.100 killing process with pid 72172 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72172' 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@973 -- # kill 72172 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@978 -- # wait 72172 00:06:15.100 spdk_app_start is called in Round 0. 00:06:15.100 Shutdown signal received, stop current app iteration 00:06:15.100 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 reinitialization... 00:06:15.100 spdk_app_start is called in Round 1. 00:06:15.100 Shutdown signal received, stop current app iteration 00:06:15.100 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 reinitialization... 00:06:15.100 spdk_app_start is called in Round 2. 00:06:15.100 Shutdown signal received, stop current app iteration 00:06:15.100 Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 reinitialization... 00:06:15.100 spdk_app_start is called in Round 3. 00:06:15.100 Shutdown signal received, stop current app iteration 00:06:15.100 00:51:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.100 00:51:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.100 00:06:15.100 real 0m16.847s 00:06:15.100 user 0m37.705s 00:06:15.100 sys 0m2.065s 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.100 ************************************ 00:06:15.100 END TEST app_repeat 00:06:15.100 ************************************ 00:06:15.100 00:51:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 00:51:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.100 00:51:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:15.100 00:51:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.100 00:51:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.100 00:51:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.100 ************************************ 00:06:15.100 START TEST cpu_locks 00:06:15.100 ************************************ 00:06:15.100 00:51:37 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:15.359 * Looking for test storage... 
00:06:15.359 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.359 00:51:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.359 --rc genhtml_branch_coverage=1 00:06:15.359 --rc genhtml_function_coverage=1 00:06:15.359 --rc genhtml_legend=1 00:06:15.359 --rc geninfo_all_blocks=1 00:06:15.359 --rc geninfo_unexecuted_blocks=1 00:06:15.359 00:06:15.359 ' 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.359 --rc genhtml_branch_coverage=1 00:06:15.359 --rc genhtml_function_coverage=1 
00:06:15.359 --rc genhtml_legend=1 00:06:15.359 --rc geninfo_all_blocks=1 00:06:15.359 --rc geninfo_unexecuted_blocks=1 00:06:15.359 00:06:15.359 ' 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.359 --rc genhtml_branch_coverage=1 00:06:15.359 --rc genhtml_function_coverage=1 00:06:15.359 --rc genhtml_legend=1 00:06:15.359 --rc geninfo_all_blocks=1 00:06:15.359 --rc geninfo_unexecuted_blocks=1 00:06:15.359 00:06:15.359 ' 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.359 --rc genhtml_branch_coverage=1 00:06:15.359 --rc genhtml_function_coverage=1 00:06:15.359 --rc genhtml_legend=1 00:06:15.359 --rc geninfo_all_blocks=1 00:06:15.359 --rc geninfo_unexecuted_blocks=1 00:06:15.359 00:06:15.359 ' 00:06:15.359 00:51:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.359 00:51:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.359 00:51:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.359 00:51:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.359 00:51:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 ************************************ 00:06:15.359 START TEST default_locks 00:06:15.359 ************************************ 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72592 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72592 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72592 ']' 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.359 00:51:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.359 [2024-11-26 00:51:38.231971] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:06:15.359 [2024-11-26 00:51:38.232215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72592 ] 00:06:15.619 [2024-11-26 00:51:38.364421] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.619 [2024-11-26 00:51:38.391379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.619 [2024-11-26 00:51:38.409635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.186 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.186 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:16.186 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72592 00:06:16.186 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72592 00:06:16.186 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.464 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72592 00:06:16.464 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 72592 ']' 00:06:16.464 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 72592 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72592 00:06:16.465 killing process with pid 72592 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72592' 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 72592 00:06:16.465 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 72592 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72592 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72592 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:16.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
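
The locks_exist helper traced here comes down to one question: does the target pid hold a file lock whose path mentions spdk_cpu_lock? When spdk_tgt starts with -m 0x1 it takes a POSIX lock on a per-core lock file for every core in the mask, and lslocks can attribute that lock back to the pid. A minimal sketch of the same check, with the pid as a placeholder:

pid=72592   # placeholder: pid of the spdk_tgt under test

# a claimed core shows up as a held file lock whose path contains
# "spdk_cpu_lock"; no match means the locks were never taken or were released
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
  echo "pid $pid holds at least one CPU core lock"
else
  echo "pid $pid holds no core locks"
fi
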
00:06:16.736 ERROR: process (pid: 72592) is no longer running 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 72592 00:06:16.736 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 72592 ']' 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.737 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72592) - No such process 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.737 ************************************ 00:06:16.737 END TEST default_locks 00:06:16.737 ************************************ 00:06:16.737 00:06:16.737 real 0m1.320s 00:06:16.737 user 0m1.342s 00:06:16.737 sys 0m0.400s 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.737 00:51:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.737 00:51:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:16.737 00:51:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.737 00:51:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.737 00:51:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.737 ************************************ 00:06:16.737 START TEST default_locks_via_rpc 00:06:16.737 ************************************ 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72634 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72634 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72634 ']' 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.737 00:51:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.737 [2024-11-26 00:51:39.586786] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:16.737 [2024-11-26 00:51:39.586913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72634 ] 00:06:16.996 [2024-11-26 00:51:39.718386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:16.996 [2024-11-26 00:51:39.749376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.996 [2024-11-26 00:51:39.767658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72634 00:06:17.562 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72634 00:06:17.562 00:51:40 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72634 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 72634 ']' 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 72634 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72634 00:06:17.823 killing process with pid 72634 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72634' 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 72634 00:06:17.823 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 72634 00:06:18.083 00:06:18.083 real 0m1.348s 00:06:18.083 user 0m1.376s 00:06:18.083 sys 0m0.399s 00:06:18.083 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.083 ************************************ 00:06:18.083 END TEST default_locks_via_rpc 00:06:18.083 ************************************ 00:06:18.083 00:51:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.084 00:51:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:18.084 00:51:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.084 00:51:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.084 00:51:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.084 ************************************ 00:06:18.084 START TEST non_locking_app_on_locked_coremask 00:06:18.084 ************************************ 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72686 00:06:18.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72686 /var/tmp/spdk.sock 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72686 ']' 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.084 00:51:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.084 [2024-11-26 00:51:40.983266] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:18.084 [2024-11-26 00:51:40.983381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72686 ] 00:06:18.383 [2024-11-26 00:51:41.113952] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.383 [2024-11-26 00:51:41.140106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.383 [2024-11-26 00:51:41.158261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72696 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72696 /var/tmp/spdk2.sock 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72696 ']' 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
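
The scenario being assembled here is two targets sharing core 0: the first instance (pid 72686) launches normally and acquires the core lock, and the second can coexist only because it opts out of locking and talks on its own RPC socket. Condensed from the two launches in the trace; in the test the first instance is fully up before the second starts, which the plain backgrounding below glosses over.

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

# first app: claims core 0 and takes its lock file
"$bin" -m 0x1 &

# second app: same core mask, but skips the lock and uses a second socket
"$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
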
00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.951 00:51:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.212 [2024-11-26 00:51:41.882310] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:19.212 [2024-11-26 00:51:41.882587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72696 ] 00:06:19.212 [2024-11-26 00:51:42.015561] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:19.212 [2024-11-26 00:51:42.057567] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:19.212 [2024-11-26 00:51:42.057607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.212 [2024-11-26 00:51:42.096491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72686 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72686 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72686 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72686 ']' 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72686 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72686 00:06:20.155 killing process with pid 72686 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72686' 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72686 00:06:20.155 00:51:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72686 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72696 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72696 ']' 00:06:20.722 00:51:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72696 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72696 00:06:20.722 killing process with pid 72696 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72696' 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72696 00:06:20.722 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72696 00:06:20.982 00:06:20.983 real 0m2.776s 00:06:20.983 user 0m3.089s 00:06:20.983 sys 0m0.750s 00:06:20.983 ************************************ 00:06:20.983 END TEST non_locking_app_on_locked_coremask 00:06:20.983 ************************************ 00:06:20.983 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.983 00:51:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.983 00:51:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:20.983 00:51:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.983 00:51:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.983 00:51:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.983 ************************************ 00:06:20.983 START TEST locking_app_on_unlocked_coremask 00:06:20.983 ************************************ 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72749 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72749 /var/tmp/spdk.sock 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72749 ']' 00:06:20.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
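
Core locking can be relaxed in two ways across these tests: at launch with --disable-cpumask-locks, as the test starting here does, or at runtime over RPC, which is what default_locks_via_rpc exercised earlier with framework_disable_cpumask_locks and framework_enable_cpumask_locks. A sketch of the runtime toggle against the default socket:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# release the per-core lock files while the app keeps running...
"$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks

# ...and take them again afterwards
"$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
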
00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.983 00:51:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:20.983 [2024-11-26 00:51:43.818682] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:20.983 [2024-11-26 00:51:43.818948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72749 ] 00:06:21.242 [2024-11-26 00:51:43.951207] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.242 [2024-11-26 00:51:43.975559] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.242 [2024-11-26 00:51:43.975585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.242 [2024-11-26 00:51:43.992418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72765 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72765 /var/tmp/spdk2.sock 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72765 ']' 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.808 00:51:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.808 [2024-11-26 00:51:44.713680] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:21.808 [2024-11-26 00:51:44.714282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72765 ] 00:06:22.067 [2024-11-26 00:51:44.846504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:22.067 [2024-11-26 00:51:44.879411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.067 [2024-11-26 00:51:44.912447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72765 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72765 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72749 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72749 ']' 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72749 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72749 00:06:23.002 killing process with pid 72749 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72749' 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72749 00:06:23.002 00:51:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72749 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72765 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72765 ']' 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72765 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72765 00:06:23.569 killing process with pid 72765 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72765' 00:06:23.569 
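A minimal standalone version of the locks_exist check traced above, using the same lslocks/grep pipeline; the pid is the spdk_tgt_pid2 from this run, and the lock name matches the /var/tmp/spdk_cpu_lock_* files listed later in the trace:

pid=72765                                  # spdk_tgt_pid2 from this run
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds at least one CPU core lock"
else
    echo "pid $pid holds no CPU core locks" >&2
fi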
00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72765 00:06:23.569 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72765 00:06:23.830 ************************************ 00:06:23.830 END TEST locking_app_on_unlocked_coremask 00:06:23.830 ************************************ 00:06:23.830 00:06:23.830 real 0m2.806s 00:06:23.830 user 0m3.120s 00:06:23.830 sys 0m0.757s 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.830 00:51:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:23.830 00:51:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.830 00:51:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.830 00:51:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.830 ************************************ 00:06:23.830 START TEST locking_app_on_locked_coremask 00:06:23.830 ************************************ 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:23.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72823 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72823 /var/tmp/spdk.sock 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72823 ']' 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.830 00:51:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.830 [2024-11-26 00:51:46.689339] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:23.830 [2024-11-26 00:51:46.689603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72823 ] 00:06:24.089 [2024-11-26 00:51:46.823893] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:24.089 [2024-11-26 00:51:46.848806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.089 [2024-11-26 00:51:46.873106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72839 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72839 /var/tmp/spdk2.sock 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72839 /var/tmp/spdk2.sock 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72839 /var/tmp/spdk2.sock 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72839 ']' 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.657 00:51:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.915 [2024-11-26 00:51:47.611027] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:24.915 [2024-11-26 00:51:47.611444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72839 ] 00:06:24.915 [2024-11-26 00:51:47.745489] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:24.916 [2024-11-26 00:51:47.778383] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72823 has claimed it. 
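The claim failure above comes from SPDK's per-core lock files; a sketch of an equivalent claim in shell, assuming a flock-style advisory lock (SPDK's actual claim lives in app.c and may use fcntl locks instead; the /var/tmp/spdk_cpu_lock_NNN naming is taken from the check_remaining_locks output later in this trace):

core=0
lockfile=$(printf '/var/tmp/spdk_cpu_lock_%03d' "$core")
exec 9>"$lockfile"                         # open (or create) the lock file on fd 9
if ! flock -n 9; then                      # non-blocking: fail if another process holds it
    echo "Cannot create lock on core $core, another process has claimed it" >&2
    exit 1
fi
echo "claimed core $core via $lockfile"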
00:06:24.916 [2024-11-26 00:51:47.778423] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.482 ERROR: process (pid: 72839) is no longer running 00:06:25.482 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72839) - No such process 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72823 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.482 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72823 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72823 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72823 ']' 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72823 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72823 00:06:25.741 killing process with pid 72823 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72823' 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72823 00:06:25.741 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72823 00:06:26.001 00:06:26.001 real 0m2.068s 00:06:26.001 user 0m2.335s 00:06:26.001 sys 0m0.501s 00:06:26.001 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.001 00:51:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.001 ************************************ 00:06:26.001 END TEST locking_app_on_locked_coremask 00:06:26.001 ************************************ 00:06:26.001 00:51:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.001 00:51:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.001 00:51:48 event.cpu_locks 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.001 00:51:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.001 ************************************ 00:06:26.001 START TEST locking_overlapped_coremask 00:06:26.002 ************************************ 00:06:26.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72881 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72881 /var/tmp/spdk.sock 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72881 ']' 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.002 00:51:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.002 [2024-11-26 00:51:48.807225] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:26.002 [2024-11-26 00:51:48.807343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72881 ] 00:06:26.263 [2024-11-26 00:51:48.941615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
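The failure that follows is predictable from the two cpumasks alone: this target runs with -m 0x7 (cores 0-2) while the second instance started below uses -m 0x1c (cores 2-4), so both masks claim core 2. The contested core can be read straight off the bitwise AND:

printf 'contested core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2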
00:06:26.263 [2024-11-26 00:51:48.970346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.263 [2024-11-26 00:51:48.998506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.263 [2024-11-26 00:51:48.998784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.263 [2024-11-26 00:51:48.998784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72899 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72899 /var/tmp/spdk2.sock 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72899 /var/tmp/spdk2.sock 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72899 /var/tmp/spdk2.sock 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72899 ']' 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.836 00:51:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.836 [2024-11-26 00:51:49.724147] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:26.836 [2024-11-26 00:51:49.724253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72899 ] 00:06:27.120 [2024-11-26 00:51:49.858121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:27.121 [2024-11-26 00:51:49.899585] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72881 has claimed it. 00:06:27.121 [2024-11-26 00:51:49.899628] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.702 ERROR: process (pid: 72899) is no longer running 00:06:27.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72899) - No such process 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72881 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72881 ']' 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72881 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72881 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72881' 00:06:27.702 killing process with pid 72881 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72881 00:06:27.702 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72881 00:06:27.964 00:06:27.964 real 0m1.913s 00:06:27.964 user 0m5.161s 00:06:27.964 sys 0m0.479s 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.964 
************************************ 00:06:27.964 END TEST locking_overlapped_coremask 00:06:27.964 ************************************ 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.964 00:51:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:27.964 00:51:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.964 00:51:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.964 00:51:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.964 ************************************ 00:06:27.964 START TEST locking_overlapped_coremask_via_rpc 00:06:27.964 ************************************ 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72941 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72941 /var/tmp/spdk.sock 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72941 ']' 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.964 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.965 00:51:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:27.965 [2024-11-26 00:51:50.783646] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:27.965 [2024-11-26 00:51:50.783907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72941 ] 00:06:28.227 [2024-11-26 00:51:50.916507] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:28.227 [2024-11-26 00:51:50.944317] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.227 [2024-11-26 00:51:50.944359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.227 [2024-11-26 00:51:50.965065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.227 [2024-11-26 00:51:50.965262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.227 [2024-11-26 00:51:50.965337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72959 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72959 /var/tmp/spdk2.sock 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72959 ']' 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.799 00:51:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.799 [2024-11-26 00:51:51.679636] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:28.799 [2024-11-26 00:51:51.679931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72959 ] 00:06:29.060 [2024-11-26 00:51:51.814312] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:29.060 [2024-11-26 00:51:51.859850] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.060 [2024-11-26 00:51:51.859892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.060 [2024-11-26 00:51:51.902493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.060 [2024-11-26 00:51:51.906043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.060 [2024-11-26 00:51:51.906122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.633 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.891 [2024-11-26 00:51:52.548988] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72941 has claimed it. 00:06:29.891 request: 00:06:29.891 { 00:06:29.891 "method": "framework_enable_cpumask_locks", 00:06:29.891 "req_id": 1 00:06:29.891 } 00:06:29.891 Got JSON-RPC error response 00:06:29.891 response: 00:06:29.891 { 00:06:29.891 "code": -32603, 00:06:29.891 "message": "Failed to claim CPU core: 2" 00:06:29.891 } 00:06:29.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
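The pair of RPC calls traced above can be reproduced by hand while both targets are still up; the first target claims its cores cleanly, the second fails because core 2 is already locked (the -s flag selects which target's RPC socket is addressed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # succeeds for the first target
"$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails with code -32603: "Failed to claim CPU core: 2"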
00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72941 /var/tmp/spdk.sock 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72941 ']' 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72959 /var/tmp/spdk2.sock 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72959 ']' 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
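waitforlisten itself is an SPDK test helper; a rough shell equivalent of what the trace shows it doing (max_retries=100 and the socket path are taken from the trace, while the polling body is an assumption for illustration, not SPDK's implementation):

rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do            # max_retries=100, as in the trace
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.1
done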
00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.892 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.151 ************************************ 00:06:30.151 END TEST locking_overlapped_coremask_via_rpc 00:06:30.151 ************************************ 00:06:30.151 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.151 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:30.151 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.151 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.151 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.152 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.152 00:06:30.152 real 0m2.266s 00:06:30.152 user 0m1.072s 00:06:30.152 sys 0m0.127s 00:06:30.152 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.152 00:51:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.152 00:51:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.152 00:51:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72941 ]] 00:06:30.152 00:51:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72941 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72941 ']' 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72941 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72941 00:06:30.152 killing process with pid 72941 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72941' 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72941 00:06:30.152 00:51:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72941 00:06:30.411 00:51:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72959 ]] 00:06:30.411 00:51:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72959 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72959 ']' 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72959 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.411 
00:51:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72959 00:06:30.411 killing process with pid 72959 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72959' 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72959 00:06:30.411 00:51:53 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72959 00:06:30.671 00:51:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.671 00:51:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.671 00:51:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72941 ]] 00:06:30.671 00:51:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72941 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72941 ']' 00:06:30.671 Process with pid 72941 is not found 00:06:30.671 Process with pid 72959 is not found 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72941 00:06:30.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72941) - No such process 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72941 is not found' 00:06:30.671 00:51:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72959 ]] 00:06:30.671 00:51:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72959 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72959 ']' 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72959 00:06:30.671 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72959) - No such process 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72959 is not found' 00:06:30.671 00:51:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.671 ************************************ 00:06:30.671 END TEST cpu_locks 00:06:30.671 ************************************ 00:06:30.671 00:06:30.671 real 0m15.511s 00:06:30.671 user 0m27.465s 00:06:30.671 sys 0m4.189s 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.671 00:51:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.671 ************************************ 00:06:30.671 END TEST event 00:06:30.671 ************************************ 00:06:30.671 00:06:30.671 real 0m39.616s 00:06:30.671 user 1m16.629s 00:06:30.671 sys 0m7.075s 00:06:30.671 00:51:53 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.671 00:51:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.671 00:51:53 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:30.671 00:51:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.671 00:51:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.671 00:51:53 -- common/autotest_common.sh@10 -- # set +x 00:06:30.932 ************************************ 00:06:30.932 START TEST thread 00:06:30.932 ************************************ 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:30.932 * Looking for test storage... 
00:06:30.932 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.932 00:51:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.932 00:51:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.932 00:51:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.932 00:51:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.932 00:51:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.932 00:51:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.932 00:51:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.932 00:51:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.932 00:51:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.932 00:51:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.932 00:51:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.932 00:51:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:30.932 00:51:53 thread -- scripts/common.sh@345 -- # : 1 00:06:30.932 00:51:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.932 00:51:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.932 00:51:53 thread -- scripts/common.sh@365 -- # decimal 1 00:06:30.932 00:51:53 thread -- scripts/common.sh@353 -- # local d=1 00:06:30.932 00:51:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.932 00:51:53 thread -- scripts/common.sh@355 -- # echo 1 00:06:30.932 00:51:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.932 00:51:53 thread -- scripts/common.sh@366 -- # decimal 2 00:06:30.932 00:51:53 thread -- scripts/common.sh@353 -- # local d=2 00:06:30.932 00:51:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.932 00:51:53 thread -- scripts/common.sh@355 -- # echo 2 00:06:30.932 00:51:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.932 00:51:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.932 00:51:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.932 00:51:53 thread -- scripts/common.sh@368 -- # return 0 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.932 --rc genhtml_branch_coverage=1 00:06:30.932 --rc genhtml_function_coverage=1 00:06:30.932 --rc genhtml_legend=1 00:06:30.932 --rc geninfo_all_blocks=1 00:06:30.932 --rc geninfo_unexecuted_blocks=1 00:06:30.932 00:06:30.932 ' 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.932 --rc genhtml_branch_coverage=1 00:06:30.932 --rc genhtml_function_coverage=1 00:06:30.932 --rc genhtml_legend=1 00:06:30.932 --rc geninfo_all_blocks=1 00:06:30.932 --rc geninfo_unexecuted_blocks=1 00:06:30.932 00:06:30.932 ' 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:30.932 --rc genhtml_branch_coverage=1 00:06:30.932 --rc genhtml_function_coverage=1 00:06:30.932 --rc genhtml_legend=1 00:06:30.932 --rc geninfo_all_blocks=1 00:06:30.932 --rc geninfo_unexecuted_blocks=1 00:06:30.932 00:06:30.932 ' 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.932 --rc genhtml_branch_coverage=1 00:06:30.932 --rc genhtml_function_coverage=1 00:06:30.932 --rc genhtml_legend=1 00:06:30.932 --rc geninfo_all_blocks=1 00:06:30.932 --rc geninfo_unexecuted_blocks=1 00:06:30.932 00:06:30.932 ' 00:06:30.932 00:51:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.932 00:51:53 thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.932 ************************************ 00:06:30.932 START TEST thread_poller_perf 00:06:30.932 ************************************ 00:06:30.932 00:51:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:30.932 [2024-11-26 00:51:53.766122] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:30.932 [2024-11-26 00:51:53.766690] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73086 ] 00:06:31.190 [2024-11-26 00:51:53.896608] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.190 [2024-11-26 00:51:53.923582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.190 [2024-11-26 00:51:53.939951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.190 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:32.124 [2024-11-26T00:51:55.041Z] ====================================== 00:06:32.124 [2024-11-26T00:51:55.041Z] busy:2607661280 (cyc) 00:06:32.124 [2024-11-26T00:51:55.041Z] total_run_count: 411000 00:06:32.124 [2024-11-26T00:51:55.041Z] tsc_hz: 2600000000 (cyc) 00:06:32.124 [2024-11-26T00:51:55.041Z] ====================================== 00:06:32.124 [2024-11-26T00:51:55.041Z] poller_cost: 6344 (cyc), 2440 (nsec) 00:06:32.124 00:06:32.124 real 0m1.248s 00:06:32.124 user 0m1.082s 00:06:32.124 sys 0m0.058s 00:06:32.124 00:51:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.124 00:51:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.124 ************************************ 00:06:32.124 END TEST thread_poller_perf 00:06:32.124 ************************************ 00:06:32.124 00:51:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.124 00:51:55 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:32.124 00:51:55 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.124 00:51:55 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.124 ************************************ 00:06:32.124 START TEST thread_poller_perf 00:06:32.124 ************************************ 00:06:32.124 00:51:55 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.382 [2024-11-26 00:51:55.058774] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:32.382 [2024-11-26 00:51:55.058949] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73117 ] 00:06:32.382 [2024-11-26 00:51:55.182167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.382 [2024-11-26 00:51:55.207647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.382 Running 1000 pollers for 1 seconds with 0 microseconds period. 
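The poller_cost line in the table above is derivable from the other three numbers; the formula here is inferred from the reported values rather than taken from the poller_perf source:

busy=2607661280; runs=411000; tsc_hz=2600000000
echo "poller_cost (cyc): $(( busy / runs ))"                             # -> 6344
awk -v b="$busy" -v r="$runs" -v hz="$tsc_hz" \
    'BEGIN { printf "poller_cost (nsec): %d\n", b / r / (hz / 1e9) }'    # -> 2440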
00:06:32.382 [2024-11-26 00:51:55.223772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.760 [2024-11-26T00:51:56.677Z] ====================================== 00:06:33.760 [2024-11-26T00:51:56.677Z] busy:2602449122 (cyc) 00:06:33.760 [2024-11-26T00:51:56.677Z] total_run_count: 5339000 00:06:33.760 [2024-11-26T00:51:56.677Z] tsc_hz: 2600000000 (cyc) 00:06:33.760 [2024-11-26T00:51:56.677Z] ====================================== 00:06:33.760 [2024-11-26T00:51:56.677Z] poller_cost: 487 (cyc), 187 (nsec) 00:06:33.760 ************************************ 00:06:33.760 END TEST thread_poller_perf 00:06:33.760 ************************************ 00:06:33.760 00:06:33.760 real 0m1.231s 00:06:33.760 user 0m1.071s 00:06:33.760 sys 0m0.055s 00:06:33.760 00:51:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.760 00:51:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.760 00:51:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.760 ************************************ 00:06:33.760 END TEST thread 00:06:33.760 ************************************ 00:06:33.760 00:06:33.760 real 0m2.708s 00:06:33.760 user 0m2.255s 00:06:33.760 sys 0m0.231s 00:06:33.760 00:51:56 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.760 00:51:56 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.760 00:51:56 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:33.760 00:51:56 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.760 00:51:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.760 00:51:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.760 00:51:56 -- common/autotest_common.sh@10 -- # set +x 00:06:33.760 ************************************ 00:06:33.760 START TEST app_cmdline 00:06:33.760 ************************************ 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:33.760 * Looking for test storage... 
00:06:33.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:33.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.760 00:51:56 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.760 --rc genhtml_branch_coverage=1 00:06:33.760 --rc genhtml_function_coverage=1 00:06:33.760 --rc genhtml_legend=1 00:06:33.760 --rc geninfo_all_blocks=1 00:06:33.760 --rc geninfo_unexecuted_blocks=1 00:06:33.760 00:06:33.760 ' 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.760 --rc genhtml_branch_coverage=1 00:06:33.760 --rc genhtml_function_coverage=1 00:06:33.760 --rc genhtml_legend=1 00:06:33.760 --rc geninfo_all_blocks=1 00:06:33.760 --rc geninfo_unexecuted_blocks=1 00:06:33.760 00:06:33.760 ' 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.760 --rc genhtml_branch_coverage=1 00:06:33.760 --rc genhtml_function_coverage=1 00:06:33.760 --rc genhtml_legend=1 00:06:33.760 --rc geninfo_all_blocks=1 00:06:33.760 --rc geninfo_unexecuted_blocks=1 00:06:33.760 00:06:33.760 ' 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.760 --rc genhtml_branch_coverage=1 00:06:33.760 --rc genhtml_function_coverage=1 00:06:33.760 --rc genhtml_legend=1 00:06:33.760 --rc geninfo_all_blocks=1 00:06:33.760 --rc geninfo_unexecuted_blocks=1 00:06:33.760 00:06:33.760 ' 00:06:33.760 00:51:56 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:33.760 00:51:56 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73206 00:06:33.760 00:51:56 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73206 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 73206 ']' 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.760 00:51:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.760 00:51:56 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:33.760 [2024-11-26 00:51:56.584430] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
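With --rpcs-allowed the target only answers the two listed methods; the same calls the test drives through rpc_cmd can be issued directly (all three method names appear in this trace — env_dpdk_get_mem_stats is the one rejected with -32601 further down):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" spdk_get_version                       # allowed: returns the version JSON shown below
"$rpc" rpc_get_methods | jq -r '.[]' | sort   # allowed: lists exactly the permitted methods
"$rpc" env_dpdk_get_mem_stats                 # blocked: JSON-RPC error -32601 "Method not found"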
00:06:33.760 [2024-11-26 00:51:56.584544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73206 ] 00:06:34.019 [2024-11-26 00:51:56.718564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:34.019 [2024-11-26 00:51:56.742923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.019 [2024-11-26 00:51:56.766322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.587 00:51:57 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.587 00:51:57 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:34.587 00:51:57 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:34.846 { 00:06:34.846 "version": "SPDK v25.01-pre git sha1 2a91567e4", 00:06:34.846 "fields": { 00:06:34.846 "major": 25, 00:06:34.846 "minor": 1, 00:06:34.846 "patch": 0, 00:06:34.846 "suffix": "-pre", 00:06:34.846 "commit": "2a91567e4" 00:06:34.846 } 00:06:34.846 } 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:34.846 00:51:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:34.846 
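The trace above exercises the two allow-listed methods: spdk_get_version returns the build's version object, and rpc_get_methods is checked (via the jq -r '.[]' | sort pipeline) to contain exactly the expected names. Reproduced by hand against this run's target (rpc.py path, socket, and the jq/sort pipeline are verbatim from the trace; the comparison framing is a sketch):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Version string as reported by the target, e.g. "SPDK v25.01-pre git sha1 2a91567e4".
$rpc -s /var/tmp/spdk.sock spdk_get_version | jq -r .version
# The allow-list pins the RPC surface to exactly two methods.
methods=$($rpc -s /var/tmp/spdk.sock rpc_get_methods | jq -r '.[]' | sort | paste -sd' ')
[[ $methods == "rpc_get_methods spdk_get_version" ]] && echo "method list matches the allow-list"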
00:51:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:34.846 00:51:57 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.104 request: 00:06:35.104 { 00:06:35.104 "method": "env_dpdk_get_mem_stats", 00:06:35.104 "req_id": 1 00:06:35.104 } 00:06:35.104 Got JSON-RPC error response 00:06:35.104 response: 00:06:35.104 { 00:06:35.104 "code": -32601, 00:06:35.104 "message": "Method not found" 00:06:35.104 } 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.104 00:51:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73206 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 73206 ']' 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 73206 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73206 00:06:35.104 killing process with pid 73206 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73206' 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 73206 00:06:35.104 00:51:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 73206 00:06:35.363 00:06:35.363 real 0m1.725s 00:06:35.363 user 0m2.083s 00:06:35.363 sys 0m0.381s 00:06:35.363 00:51:58 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.363 ************************************ 00:06:35.363 END TEST app_cmdline 00:06:35.363 00:51:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.363 ************************************ 00:06:35.363 00:51:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.363 00:51:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.363 00:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.364 00:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:35.364 ************************************ 00:06:35.364 START TEST version 00:06:35.364 ************************************ 00:06:35.364 00:51:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:35.364 * Looking for test storage... 
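The failed call above is the negative half of the allow-list check: env_dpdk_get_mem_stats exists in spdk_tgt, but it is not on the --rpcs-allowed list, so the target answers with the standard JSON-RPC 2.0 "method not found" code. Replayed by hand it looks like this (socket path as in this run; rpc.py exits non-zero on the error):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
# request:  {"method": "env_dpdk_get_mem_stats", "req_id": 1}
# response: {"code": -32601, "message": "Method not found"}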
00:06:35.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:35.364 00:51:58 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.364 00:51:58 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.364 00:51:58 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.364 00:51:58 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.364 00:51:58 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.364 00:51:58 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.364 00:51:58 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.364 00:51:58 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.364 00:51:58 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.364 00:51:58 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.364 00:51:58 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.364 00:51:58 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.364 00:51:58 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.364 00:51:58 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.364 00:51:58 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.364 00:51:58 version -- scripts/common.sh@344 -- # case "$op" in 00:06:35.364 00:51:58 version -- scripts/common.sh@345 -- # : 1 00:06:35.364 00:51:58 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.364 00:51:58 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.364 00:51:58 version -- scripts/common.sh@365 -- # decimal 1 00:06:35.364 00:51:58 version -- scripts/common.sh@353 -- # local d=1 00:06:35.364 00:51:58 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.364 00:51:58 version -- scripts/common.sh@355 -- # echo 1 00:06:35.364 00:51:58 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.364 00:51:58 version -- scripts/common.sh@366 -- # decimal 2 00:06:35.623 00:51:58 version -- scripts/common.sh@353 -- # local d=2 00:06:35.624 00:51:58 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.624 00:51:58 version -- scripts/common.sh@355 -- # echo 2 00:06:35.624 00:51:58 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.624 00:51:58 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.624 00:51:58 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.624 00:51:58 version -- scripts/common.sh@368 -- # return 0 00:06:35.624 00:51:58 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.624 00:51:58 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.624 --rc genhtml_branch_coverage=1 00:06:35.624 --rc genhtml_function_coverage=1 00:06:35.624 --rc genhtml_legend=1 00:06:35.624 --rc geninfo_all_blocks=1 00:06:35.624 --rc geninfo_unexecuted_blocks=1 00:06:35.624 00:06:35.624 ' 00:06:35.624 00:51:58 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.624 --rc genhtml_branch_coverage=1 00:06:35.624 --rc genhtml_function_coverage=1 00:06:35.624 --rc genhtml_legend=1 00:06:35.624 --rc geninfo_all_blocks=1 00:06:35.624 --rc geninfo_unexecuted_blocks=1 00:06:35.624 00:06:35.624 ' 00:06:35.624 00:51:58 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.624 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:35.624 --rc genhtml_branch_coverage=1 00:06:35.624 --rc genhtml_function_coverage=1 00:06:35.624 --rc genhtml_legend=1 00:06:35.624 --rc geninfo_all_blocks=1 00:06:35.624 --rc geninfo_unexecuted_blocks=1 00:06:35.624 00:06:35.624 ' 00:06:35.624 00:51:58 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.624 --rc genhtml_branch_coverage=1 00:06:35.624 --rc genhtml_function_coverage=1 00:06:35.624 --rc genhtml_legend=1 00:06:35.624 --rc geninfo_all_blocks=1 00:06:35.624 --rc geninfo_unexecuted_blocks=1 00:06:35.624 00:06:35.624 ' 00:06:35.624 00:51:58 version -- app/version.sh@17 -- # get_header_version major 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # cut -f2 00:06:35.624 00:51:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.624 00:51:58 version -- app/version.sh@17 -- # major=25 00:06:35.624 00:51:58 version -- app/version.sh@18 -- # get_header_version minor 00:06:35.624 00:51:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # cut -f2 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.624 00:51:58 version -- app/version.sh@18 -- # minor=1 00:06:35.624 00:51:58 version -- app/version.sh@19 -- # get_header_version patch 00:06:35.624 00:51:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # cut -f2 00:06:35.624 00:51:58 version -- app/version.sh@19 -- # patch=0 00:06:35.624 00:51:58 version -- app/version.sh@20 -- # get_header_version suffix 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # cut -f2 00:06:35.624 00:51:58 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:35.624 00:51:58 version -- app/version.sh@14 -- # tr -d '"' 00:06:35.624 00:51:58 version -- app/version.sh@20 -- # suffix=-pre 00:06:35.624 00:51:58 version -- app/version.sh@22 -- # version=25.1 00:06:35.624 00:51:58 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:35.624 00:51:58 version -- app/version.sh@28 -- # version=25.1rc0 00:06:35.624 00:51:58 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:35.624 00:51:58 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:35.624 00:51:58 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:35.624 00:51:58 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:35.624 ************************************ 00:06:35.624 END TEST version 00:06:35.624 ************************************ 00:06:35.624 00:06:35.624 real 0m0.196s 00:06:35.624 user 0m0.134s 00:06:35.624 sys 0m0.085s 00:06:35.624 00:51:58 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.624 00:51:58 version -- common/autotest_common.sh@10 -- # set +x 00:06:35.624 00:51:58 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:35.624 00:51:58 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:35.624 00:51:58 -- spdk/autotest.sh@194 -- # uname -s 00:06:35.624 00:51:58 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:35.624 00:51:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:35.624 00:51:58 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:35.624 00:51:58 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:35.624 00:51:58 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:35.624 00:51:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:35.624 00:51:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.624 00:51:58 -- common/autotest_common.sh@10 -- # set +x 00:06:35.624 ************************************ 00:06:35.624 START TEST blockdev_nvme 00:06:35.624 ************************************ 00:06:35.624 00:51:58 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:35.624 * Looking for test storage... 00:06:35.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:35.624 00:51:58 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.624 00:51:58 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.624 00:51:58 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.624 00:51:58 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.624 00:51:58 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:35.887 00:51:58 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.887 00:51:58 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.887 00:51:58 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.887 00:51:58 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.887 --rc genhtml_branch_coverage=1 00:06:35.887 --rc genhtml_function_coverage=1 00:06:35.887 --rc genhtml_legend=1 00:06:35.887 --rc geninfo_all_blocks=1 00:06:35.887 --rc geninfo_unexecuted_blocks=1 00:06:35.887 00:06:35.887 ' 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.887 --rc genhtml_branch_coverage=1 00:06:35.887 --rc genhtml_function_coverage=1 00:06:35.887 --rc genhtml_legend=1 00:06:35.887 --rc geninfo_all_blocks=1 00:06:35.887 --rc geninfo_unexecuted_blocks=1 00:06:35.887 00:06:35.887 ' 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.887 --rc genhtml_branch_coverage=1 00:06:35.887 --rc genhtml_function_coverage=1 00:06:35.887 --rc genhtml_legend=1 00:06:35.887 --rc geninfo_all_blocks=1 00:06:35.887 --rc geninfo_unexecuted_blocks=1 00:06:35.887 00:06:35.887 ' 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.887 --rc genhtml_branch_coverage=1 00:06:35.887 --rc genhtml_function_coverage=1 00:06:35.887 --rc genhtml_legend=1 00:06:35.887 --rc geninfo_all_blocks=1 00:06:35.887 --rc geninfo_unexecuted_blocks=1 00:06:35.887 00:06:35.887 ' 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:35.887 00:51:58 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73367 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73367 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 73367 ']' 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.887 00:51:58 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.887 00:51:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.887 [2024-11-26 00:51:58.638291] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:35.887 [2024-11-26 00:51:58.639129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:06:35.887 [2024-11-26 00:51:58.776282] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:36.149 [2024-11-26 00:51:58.803188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.149 [2024-11-26 00:51:58.834124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.796 00:51:59 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.796 00:51:59 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:36.796 00:51:59 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:36.796 00:51:59 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:36.796 00:51:59 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:36.796 00:51:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:36.796 00:51:59 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:36.796 00:51:59 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:36.796 00:51:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.796 00:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 
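The single line that gen_nvme.sh handed to load_subsystem_config above is easier to audit pretty-printed. Reformatted (content verbatim from the trace, showing the first of the four controllers; Nvme1, Nvme2 and Nvme3 repeat the same method at traddr 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0):

{
  "subsystem": "bdev",
  "config": [
    {
      "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
    }
  ]
}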
00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.060 00:51:59 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.060 00:51:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:37.061 00:51:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3025e5e5-1eb5-4678-9971-45bb9ab89612"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3025e5e5-1eb5-4678-9971-45bb9ab89612",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8eccd930-394b-4e9d-9bc4-aff63b5da175"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8eccd930-394b-4e9d-9bc4-aff63b5da175",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e8d538d2-65fe-490c-9754-fdec2153d6f9"' ' ],' ' 
"product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e8d538d2-65fe-490c-9754-fdec2153d6f9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c15ab1a2-45b8-4945-9d3b-621f8aab1a2c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c15ab1a2-45b8-4945-9d3b-621f8aab1a2c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "1b918cd7-e6f6-4f99-be1d-c462d09d04cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1b918cd7-e6f6-4f99-be1d-c462d09d04cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' 
"nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "dbd4c36f-a54f-45fc-937f-e059dfbf4f34"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dbd4c36f-a54f-45fc-937f-e059dfbf4f34",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:37.061 00:51:59 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:37.321 00:51:59 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:37.321 00:51:59 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:37.321 00:51:59 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:37.321 00:51:59 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 73367 00:06:37.321 00:51:59 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 73367 ']' 00:06:37.321 00:51:59 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 73367 00:06:37.321 00:51:59 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:37.321 00:51:59 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.321 00:51:59 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73367 00:06:37.321 killing process with pid 73367 00:06:37.321 00:52:00 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.321 00:52:00 
blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.321 00:52:00 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73367' 00:06:37.321 00:52:00 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 73367 00:06:37.321 00:52:00 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 73367 00:06:37.582 00:52:00 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:37.582 00:52:00 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.582 00:52:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:37.582 00:52:00 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.582 00:52:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.582 ************************************ 00:06:37.582 START TEST bdev_hello_world 00:06:37.582 ************************************ 00:06:37.582 00:52:00 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.582 [2024-11-26 00:52:00.400174] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:37.582 [2024-11-26 00:52:00.400300] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73440 ] 00:06:37.843 [2024-11-26 00:52:00.532707] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:37.843 [2024-11-26 00:52:00.561567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.843 [2024-11-26 00:52:00.581298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.103 [2024-11-26 00:52:00.953204] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:38.103 [2024-11-26 00:52:00.953261] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:38.103 [2024-11-26 00:52:00.953284] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:38.103 [2024-11-26 00:52:00.955441] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:38.103 [2024-11-26 00:52:00.956529] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:38.103 [2024-11-26 00:52:00.956560] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:38.103 [2024-11-26 00:52:00.956962] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:06:38.103 00:06:38.103 [2024-11-26 00:52:00.956987] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:38.364 ************************************ 00:06:38.364 END TEST bdev_hello_world 00:06:38.364 ************************************ 00:06:38.364 00:06:38.364 real 0m0.768s 00:06:38.364 user 0m0.506s 00:06:38.364 sys 0m0.158s 00:06:38.364 00:52:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.364 00:52:01 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:38.364 00:52:01 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:38.364 00:52:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.364 00:52:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.364 00:52:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.364 ************************************ 00:06:38.364 START TEST bdev_bounds 00:06:38.364 ************************************ 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:38.364 Process bdevio pid: 73460 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73460 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73460' 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73460 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73460 ']' 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:38.364 00:52:01 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:38.364 [2024-11-26 00:52:01.242279] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:38.364 [2024-11-26 00:52:01.242583] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73460 ] 00:06:38.625 [2024-11-26 00:52:01.374664] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:38.625 [2024-11-26 00:52:01.399246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.625 [2024-11-26 00:52:01.420901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.625 [2024-11-26 00:52:01.421046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:38.625 [2024-11-26 00:52:01.421119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.196 00:52:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.196 00:52:02 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:39.196 00:52:02 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:39.458 I/O targets: 00:06:39.458 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:39.458 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:39.458 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.458 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.458 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.458 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:39.458 00:06:39.458 00:06:39.458 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.458 http://cunit.sourceforge.net/ 00:06:39.458 00:06:39.458 00:06:39.458 Suite: bdevio tests on: Nvme3n1 00:06:39.458 Test: blockdev write read block ...passed 00:06:39.458 Test: blockdev write zeroes read block ...passed 00:06:39.458 Test: blockdev write zeroes read no split ...passed 00:06:39.458 Test: blockdev write zeroes read split ...passed 00:06:39.458 Test: blockdev write zeroes read split partial ...passed 00:06:39.458 Test: blockdev reset ...[2024-11-26 00:52:02.254510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:39.458 [2024-11-26 00:52:02.256906] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:06:39.458 passed 00:06:39.458 Test: blockdev write read 8 blocks ...passed 00:06:39.458 Test: blockdev write read size > 128k ...passed 00:06:39.458 Test: blockdev write read invalid size ...passed 00:06:39.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.458 Test: blockdev write read max offset ...passed 00:06:39.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.458 Test: blockdev writev readv 8 blocks ...passed 00:06:39.458 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.458 Test: blockdev writev readv block ...passed 00:06:39.458 Test: blockdev writev readv size > 128k ...passed 00:06:39.458 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.458 Test: blockdev comparev and writev ...[2024-11-26 00:52:02.273139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d0c06000 len:0x1000 00:06:39.458 [2024-11-26 00:52:02.273313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.458 passed 00:06:39.458 Test: blockdev nvme passthru rw ...passed 00:06:39.458 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.458 Test: blockdev nvme admin passthru ...[2024-11-26 00:52:02.276231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.458 [2024-11-26 00:52:02.276297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.458 passed 00:06:39.458 Test: blockdev copy ...passed 00:06:39.458 Suite: bdevio tests on: Nvme2n3 00:06:39.458 Test: blockdev write read block ...passed 00:06:39.458 Test: blockdev write zeroes read block ...passed 00:06:39.458 Test: blockdev write zeroes read no split ...passed 00:06:39.719 Test: blockdev write zeroes read split ...passed 00:06:39.719 Test: blockdev write zeroes read split partial ...passed 00:06:39.719 Test: blockdev reset ...[2024-11-26 00:52:02.381256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.719 [2024-11-26 00:52:02.383532] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:39.719 passed 00:06:39.719 Test: blockdev write read 8 blocks ...passed 00:06:39.719 Test: blockdev write read size > 128k ...passed 00:06:39.719 Test: blockdev write read invalid size ...passed 00:06:39.719 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.719 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.719 Test: blockdev write read max offset ...passed 00:06:39.719 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.719 Test: blockdev writev readv 8 blocks ...passed 00:06:39.719 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.719 Test: blockdev writev readv block ...passed 00:06:39.719 Test: blockdev writev readv size > 128k ...passed 00:06:39.719 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.719 Test: blockdev comparev and writev ...[2024-11-26 00:52:02.397211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x306205000 len:0x1000 00:06:39.719 [2024-11-26 00:52:02.397332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.719 passed 00:06:39.719 Test: blockdev nvme passthru rw ...passed 00:06:39.719 Test: blockdev nvme passthru vendor specific ...[2024-11-26 00:52:02.399673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.719 [2024-11-26 00:52:02.399925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.719 passed 00:06:39.719 Test: blockdev nvme admin passthru ...passed 00:06:39.719 Test: blockdev copy ...passed 00:06:39.719 Suite: bdevio tests on: Nvme2n2 00:06:39.719 Test: blockdev write read block ...passed 00:06:39.719 Test: blockdev write zeroes read block ...passed 00:06:39.719 Test: blockdev write zeroes read no split ...passed 00:06:39.719 Test: blockdev write zeroes read split ...passed 00:06:39.719 Test: blockdev write zeroes read split partial ...passed 00:06:39.719 Test: blockdev reset ...[2024-11-26 00:52:02.526028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.720 [2024-11-26 00:52:02.528479] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:39.720 passed 00:06:39.720 Test: blockdev write read 8 blocks ...passed 00:06:39.720 Test: blockdev write read size > 128k ...passed 00:06:39.720 Test: blockdev write read invalid size ...passed 00:06:39.720 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.720 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.720 Test: blockdev write read max offset ...passed 00:06:39.720 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.720 Test: blockdev writev readv 8 blocks ...passed 00:06:39.720 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.720 Test: blockdev writev readv block ...passed 00:06:39.720 Test: blockdev writev readv size > 128k ...passed 00:06:39.720 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.720 Test: blockdev comparev and writev ...[2024-11-26 00:52:02.543826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e7036000 len:0x1000 00:06:39.720 [2024-11-26 00:52:02.543890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.720 passed 00:06:39.720 Test: blockdev nvme passthru rw ...passed 00:06:39.720 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.720 Test: blockdev nvme admin passthru ...[2024-11-26 00:52:02.546223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.720 [2024-11-26 00:52:02.546256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.720 passed 00:06:39.720 Test: blockdev copy ...passed 00:06:39.720 Suite: bdevio tests on: Nvme2n1 00:06:39.720 Test: blockdev write read block ...passed 00:06:39.981 Test: blockdev write zeroes read block ...passed 00:06:39.981 Test: blockdev write zeroes read no split ...passed 00:06:39.981 Test: blockdev write zeroes read split ...passed 00:06:39.981 Test: blockdev write zeroes read split partial ...passed 00:06:39.981 Test: blockdev reset ...[2024-11-26 00:52:02.659200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.981 [2024-11-26 00:52:02.662082] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:06:39.981 passed 00:06:39.981 Test: blockdev write read 8 blocks ...passed 00:06:39.981 Test: blockdev write read size > 128k ...passed 00:06:39.981 Test: blockdev write read invalid size ...passed 00:06:39.981 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.981 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.981 Test: blockdev write read max offset ...passed 00:06:39.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.981 Test: blockdev writev readv 8 blocks ...passed 00:06:39.981 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.981 Test: blockdev writev readv block ...passed 00:06:39.981 Test: blockdev writev readv size > 128k ...passed 00:06:39.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.981 Test: blockdev comparev and writev ...[2024-11-26 00:52:02.676780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e7030000 len:0x1000 00:06:39.981 [2024-11-26 00:52:02.676821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.981 passed 00:06:39.981 Test: blockdev nvme passthru rw ...passed 00:06:39.982 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.982 Test: blockdev nvme admin passthru ...[2024-11-26 00:52:02.678826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.982 [2024-11-26 00:52:02.678863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.982 passed 00:06:39.982 Test: blockdev copy ...passed 00:06:39.982 Suite: bdevio tests on: Nvme1n1 00:06:39.982 Test: blockdev write read block ...passed 00:06:39.982 Test: blockdev write zeroes read block ...passed 00:06:39.982 Test: blockdev write zeroes read no split ...passed 00:06:39.982 Test: blockdev write zeroes read split ...passed 00:06:39.982 Test: blockdev write zeroes read split partial ...passed 00:06:39.982 Test: blockdev reset ...[2024-11-26 00:52:02.798275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:39.982 [2024-11-26 00:52:02.799897] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
00:06:39.982 passed 00:06:39.982 Test: blockdev write read 8 blocks ...passed 00:06:39.982 Test: blockdev write read size > 128k ...passed 00:06:39.982 Test: blockdev write read invalid size ...passed 00:06:39.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.982 Test: blockdev write read max offset ...passed 00:06:39.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.982 Test: blockdev writev readv 8 blocks ...passed 00:06:39.982 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.982 Test: blockdev writev readv block ...passed 00:06:39.982 Test: blockdev writev readv size > 128k ...passed 00:06:39.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.982 Test: blockdev comparev and writev ...[2024-11-26 00:52:02.808672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e702c000 len:0x1000 00:06:39.982 [2024-11-26 00:52:02.808790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.982 passed 00:06:39.982 Test: blockdev nvme passthru rw ...passed 00:06:39.982 Test: blockdev nvme passthru vendor specific ...[2024-11-26 00:52:02.810117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.982 [2024-11-26 00:52:02.810144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.982 passed 00:06:39.982 Test: blockdev nvme admin passthru ...passed 00:06:39.982 Test: blockdev copy ...passed 00:06:39.982 Suite: bdevio tests on: Nvme0n1 00:06:39.982 Test: blockdev write read block ...passed 00:06:40.244 Test: blockdev write zeroes read block ...passed 00:06:40.244 Test: blockdev write zeroes read no split ...passed 00:06:40.244 Test: blockdev write zeroes read split ...passed 00:06:40.244 Test: blockdev write zeroes read split partial ...passed 00:06:40.244 Test: blockdev reset ...[2024-11-26 00:52:02.960596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:40.244 [2024-11-26 00:52:02.962400] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:40.244 passed 00:06:40.244 Test: blockdev write read 8 blocks ...passed 00:06:40.244 Test: blockdev write read size > 128k ...passed 00:06:40.244 Test: blockdev write read invalid size ...passed 00:06:40.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.244 Test: blockdev write read max offset ...passed 00:06:40.244 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.244 Test: blockdev writev readv 8 blocks ...passed 00:06:40.244 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.244 Test: blockdev writev readv block ...passed 00:06:40.244 Test: blockdev writev readv size > 128k ...passed 00:06:40.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.244 Test: blockdev comparev and writev ...[2024-11-26 00:52:02.974665] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:40.244 separate metadata which is not supported yet.
00:52:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73460 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73460 ']' 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73460 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73460 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.244 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73460' killing process with pid 73460 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73460 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73460 00:06:40.505 00:52:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:40.505 00:06:40.505 real 0m2.013s 00:06:40.505 user 0m4.795s 00:06:40.505 sys 0m0.291s 00:06:40.505 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.505 00:52:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:40.505 ************************************ 00:06:40.505 END TEST bdev_bounds 00:06:40.505 ************************************ 00:06:40.505 00:52:03 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:40.505 00:52:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:40.505 00:52:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.505 00:52:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:40.505 ************************************ 00:06:40.505 START TEST bdev_nbd 00:06:40.505 ************************************ 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local
rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73514 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73514 /var/tmp/spdk-nbd.sock 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73514 ']' 00:06:40.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.505 00:52:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:40.505 [2024-11-26 00:52:03.325367] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:40.505 [2024-11-26 00:52:03.325599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.767 [2024-11-26 00:52:03.460763] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
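The nbd_rpc_start_stop_verify phase traced below reduces, per bdev, to the cycle sketched here. The rpc.py socket, the nbd_start_disk/nbd_stop_disk RPCs, and the direct-I/O dd-plus-stat check are exactly what the trace shows; the loop framing and the $rpc shorthand are added only for readability:
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
    dev=$($rpc nbd_start_disk "$bdev")               # kernel assigns a node, e.g. /dev/nbd0
    grep -q -w "${dev#/dev/}" /proc/partitions       # waitfornbd: retried until the node registers
    dd if="$dev" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    test "$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)" -eq 4096   # a full 4 KiB block must read back
    $rpc nbd_stop_disk "$dev"                        # detach before moving to the next bdev
  done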
00:06:40.767 [2024-11-26 00:52:03.483451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.767 [2024-11-26 00:52:03.503647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:41.334 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.592 1+0 records in 00:06:41.592 1+0 records out 00:06:41.592 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638568 s, 6.4 MB/s 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # size=4096 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:41.592 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.851 1+0 records in 00:06:41.851 1+0 records out 00:06:41.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354882 s, 11.5 MB/s 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:41.851 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # 
(( i = 1 )) 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.109 1+0 records in 00:06:42.109 1+0 records out 00:06:42.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313269 s, 13.1 MB/s 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.109 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.110 00:52:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.368 1+0 records in 00:06:42.368 1+0 records out 00:06:42.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447832 s, 9.1 MB/s 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 
4096 '!=' 0 ']' 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.368 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.626 1+0 records in 00:06:42.626 1+0 records out 00:06:42.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530259 s, 7.7 MB/s 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.626 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.627 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@877 -- # break 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.885 1+0 records in 00:06:42.885 1+0 records out 00:06:42.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408013 s, 10.0 MB/s 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:42.885 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd0", 00:06:43.144 "bdev_name": "Nvme0n1" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd1", 00:06:43.144 "bdev_name": "Nvme1n1" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd2", 00:06:43.144 "bdev_name": "Nvme2n1" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd3", 00:06:43.144 "bdev_name": "Nvme2n2" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd4", 00:06:43.144 "bdev_name": "Nvme2n3" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd5", 00:06:43.144 "bdev_name": "Nvme3n1" 00:06:43.144 } 00:06:43.144 ]' 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd0", 00:06:43.144 "bdev_name": "Nvme0n1" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd1", 00:06:43.144 "bdev_name": "Nvme1n1" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd2", 00:06:43.144 "bdev_name": "Nvme2n1" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd3", 00:06:43.144 "bdev_name": "Nvme2n2" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd4", 00:06:43.144 "bdev_name": "Nvme2n3" 00:06:43.144 }, 00:06:43.144 { 00:06:43.144 "nbd_device": "/dev/nbd5", 00:06:43.144 "bdev_name": "Nvme3n1" 00:06:43.144 } 00:06:43.144 ]' 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.144 00:52:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.144 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.402 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.660 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:43.918 
00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.918 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.176 00:52:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.435 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:44.694 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:44.952 /dev/nbd0 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:44.952 
00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:44.952 1+0 records in 00:06:44.952 1+0 records out 00:06:44.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417315 s, 9.8 MB/s 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:44.952 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.953 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:44.953 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.953 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:44.953 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:44.953 /dev/nbd1 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.211 1+0 records in 00:06:45.211 1+0 records out 00:06:45.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351891 s, 11.6 MB/s 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.211 00:52:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:45.211 /dev/nbd10 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.211 1+0 records in 00:06:45.211 1+0 records out 00:06:45.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421346 s, 9.7 MB/s 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.211 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:45.470 /dev/nbd11 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.470 1+0 records in 00:06:45.470 1+0 records 
out 00:06:45.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426426 s, 9.6 MB/s 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.470 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:45.728 /dev/nbd12 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.728 1+0 records in 00:06:45.728 1+0 records out 00:06:45.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000342875 s, 11.9 MB/s 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.728 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.729 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:45.989 /dev/nbd13 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:45.989 00:52:08 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.989 1+0 records in 00:06:45.989 1+0 records out 00:06:45.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000380136 s, 10.8 MB/s 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.989 00:52:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd0", 00:06:46.247 "bdev_name": "Nvme0n1" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd1", 00:06:46.247 "bdev_name": "Nvme1n1" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd10", 00:06:46.247 "bdev_name": "Nvme2n1" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd11", 00:06:46.247 "bdev_name": "Nvme2n2" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd12", 00:06:46.247 "bdev_name": "Nvme2n3" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd13", 00:06:46.247 "bdev_name": "Nvme3n1" 00:06:46.247 } 00:06:46.247 ]' 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd0", 00:06:46.247 "bdev_name": "Nvme0n1" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd1", 00:06:46.247 "bdev_name": "Nvme1n1" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd10", 00:06:46.247 "bdev_name": "Nvme2n1" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd11", 00:06:46.247 "bdev_name": "Nvme2n2" 00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd12", 00:06:46.247 "bdev_name": "Nvme2n3" 
00:06:46.247 }, 00:06:46.247 { 00:06:46.247 "nbd_device": "/dev/nbd13", 00:06:46.247 "bdev_name": "Nvme3n1" 00:06:46.247 } 00:06:46.247 ]' 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.247 /dev/nbd1 00:06:46.247 /dev/nbd10 00:06:46.247 /dev/nbd11 00:06:46.247 /dev/nbd12 00:06:46.247 /dev/nbd13' 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.247 /dev/nbd1 00:06:46.247 /dev/nbd10 00:06:46.247 /dev/nbd11 00:06:46.247 /dev/nbd12 00:06:46.247 /dev/nbd13' 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:46.247 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:46.248 256+0 records in 00:06:46.248 256+0 records out 00:06:46.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063513 s, 165 MB/s 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.248 256+0 records in 00:06:46.248 256+0 records out 00:06:46.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0506529 s, 20.7 MB/s 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.248 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.505 256+0 records in 00:06:46.505 256+0 records out 00:06:46.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0485915 s, 21.6 MB/s 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:46.505 256+0 records in 00:06:46.505 256+0 records out 00:06:46.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0467474 s, 22.4 MB/s 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 
bs=4096 count=256 oflag=direct 00:06:46.505 256+0 records in 00:06:46.505 256+0 records out 00:06:46.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0474683 s, 22.1 MB/s 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:46.505 256+0 records in 00:06:46.505 256+0 records out 00:06:46.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0478565 s, 21.9 MB/s 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:46.505 256+0 records in 00:06:46.505 256+0 records out 00:06:46.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0474796 s, 22.1 MB/s 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.505 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.506 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.764 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.021 00:52:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.279 00:52:10 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.279 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.537 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.794 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.795 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.795 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[]' 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:48.053 00:52:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:48.312 malloc_lvol_verify 00:06:48.312 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:48.570 2c3d4418-0cbf-482d-bcb3-6087d631b090 00:06:48.570 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:48.828 dbb3457e-a0bf-438c-ac1a-25ecf82f343f 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:48.828 /dev/nbd0 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:48.828 mke2fs 1.47.0 (5-Feb-2023) 00:06:48.828 Discarding device blocks: 0/4096 done 00:06:48.828 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:48.828 00:06:48.828 Allocating group tables: 0/1 done 00:06:48.828 Writing inode tables: 0/1 done 00:06:48.828 Creating journal (1024 blocks): done 00:06:48.828 Writing superblocks and filesystem accounting information: 0/1 done 00:06:48.828 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
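Note: the data-verification pass above is a plain round trip. One 1 MiB file of random data is written to every NBD device with dd (oflag=direct, so the page cache is bypassed and the bytes really reach the SPDK-backed device), then cmp compares the first 1 MiB of each device byte-for-byte against the same file. A minimal sketch of the idiom; the temp path and the /dev/urandom fill are assumptions, since the trace does not show how test/bdev/nbdrandtest was generated:

    # Write one random 1 MiB file to every NBD device, then read back and compare.
    tmp_file=/tmp/nbdrandtest   # hypothetical path; the run above uses test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # assumed source of the random data
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # non-zero exit fails the test on any mismatch
    done
    rm "$tmp_file"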
00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.828 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73514 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73514 ']' 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73514 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73514 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.087 killing process with pid 73514 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73514' 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73514 00:06:49.087 00:52:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73514 00:06:49.347 00:52:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:49.347 00:06:49.347 real 0m8.862s 00:06:49.347 user 0m13.172s 00:06:49.347 sys 0m2.934s 00:06:49.347 ************************************ 00:06:49.347 END TEST bdev_nbd 00:06:49.347 00:52:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.347 00:52:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:49.347 ************************************ 00:06:49.347 skipping fio tests on NVMe due to multi-ns failures. 00:06:49.347 00:52:12 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:49.347 00:52:12 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:49.347 00:52:12 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
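Note: each nbd_stop_disk RPC above is paired with waitfornbd_exit, which polls /proc/partitions until the kernel has actually released the device, giving up after 20 probes; in every instance traced here the first grep already misses, so the loop breaks immediately and the retry delay never shows up in the xtrace. A sketch of the idiom, with an assumed short sleep between probes (the real helper's timeout handling may differ):

    # Wait for an NBD device to vanish from /proc/partitions after nbd_stop_disk
    # (teardown is asynchronous). Sketch only.
    waitfornbd_exit() {
        local nbd_name=$1
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || return 0   # gone: done waiting
            sleep 0.1   # assumed delay; not visible in the xtrace above
        done
        return 1   # still present after 20 probes
    }

    waitfornbd_exit nbd0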
00:06:49.347 00:52:12 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:49.347 00:52:12 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:49.347 00:52:12 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:49.347 00:52:12 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.347 00:52:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.347 ************************************ 00:06:49.347 START TEST bdev_verify 00:06:49.347 ************************************ 00:06:49.347 00:52:12 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:49.347 [2024-11-26 00:52:12.237391] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:49.347 [2024-11-26 00:52:12.237506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73884 ] 00:06:49.606 [2024-11-26 00:52:12.369988] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:49.606 [2024-11-26 00:52:12.396745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.606 [2024-11-26 00:52:12.416279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.606 [2024-11-26 00:52:12.416377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.172 Running I/O for 5 seconds... 
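Note: bdev_verify drives all six NVMe bdevs from bdev.json through the bdevperf example app. The flags in the command above select a 5-second verify workload (write a pattern, read it back, compare) at queue depth 128 with 4 KiB I/Os on cores 0 and 1. Spelled out, with the -C flag kept as-is from the trace and paths relative to the SPDK repo root:

    # The bdev_verify invocation above, with the common knobs annotated.
    args=(
        --json test/bdev/bdev.json   # bdev config: the NVMe controllers attached earlier
        -q 128                       # queue depth per job
        -o 4096                      # I/O size in bytes
        -w verify                    # write, read back, and compare
        -t 5                         # run time in seconds
        -C                           # kept as in the run above
        -m 0x3                       # core mask: cores 0 and 1
    )
    build/examples/bdevperf "${args[@]}"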
00:06:52.053 25664.00 IOPS, 100.25 MiB/s [2024-11-26T00:52:16.359Z] 23200.00 IOPS, 90.62 MiB/s [2024-11-26T00:52:17.302Z] 22464.00 IOPS, 87.75 MiB/s [2024-11-26T00:52:18.246Z] 22304.00 IOPS, 87.12 MiB/s [2024-11-26T00:52:18.246Z] 21888.00 IOPS, 85.50 MiB/s 00:06:55.329 Latency(us) 00:06:55.329 [2024-11-26T00:52:18.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:55.330 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x0 length 0xbd0bd 00:06:55.330 Nvme0n1 : 5.03 1856.02 7.25 0.00 0.00 68755.11 10485.76 63721.16 00:06:55.330 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:55.330 Nvme0n1 : 5.08 1762.70 6.89 0.00 0.00 71822.22 7360.20 64124.46 00:06:55.330 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x0 length 0xa0000 00:06:55.330 Nvme1n1 : 5.04 1855.51 7.25 0.00 0.00 68685.53 12653.49 58478.28 00:06:55.330 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0xa0000 length 0xa0000 00:06:55.330 Nvme1n1 : 5.05 1749.65 6.83 0.00 0.00 72913.01 13107.20 64124.46 00:06:55.330 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x0 length 0x80000 00:06:55.330 Nvme2n1 : 5.05 1861.19 7.27 0.00 0.00 68355.90 5646.18 58478.28 00:06:55.330 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x80000 length 0x80000 00:06:55.330 Nvme2n1 : 5.05 1749.18 6.83 0.00 0.00 72754.24 15325.34 64527.75 00:06:55.330 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x0 length 0x80000 00:06:55.330 Nvme2n2 : 5.06 1860.68 7.27 0.00 0.00 68264.43 4915.20 60898.07 00:06:55.330 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x80000 length 0x80000 00:06:55.330 Nvme2n2 : 5.05 1748.68 6.83 0.00 0.00 72632.52 15930.29 61704.66 00:06:55.330 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x0 length 0x80000 00:06:55.330 Nvme2n3 : 5.07 1867.95 7.30 0.00 0.00 68006.79 12250.19 63317.86 00:06:55.330 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x80000 length 0x80000 00:06:55.330 Nvme2n3 : 5.07 1755.30 6.86 0.00 0.00 72223.28 7662.67 60091.47 00:06:55.330 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x0 length 0x20000 00:06:55.330 Nvme3n1 : 5.07 1867.46 7.29 0.00 0.00 67914.70 5545.35 64527.75 00:06:55.330 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:55.330 Verification LBA range: start 0x20000 length 0x20000 00:06:55.330 Nvme3n1 : 5.08 1763.18 6.89 0.00 0.00 71870.98 10737.82 61301.37 00:06:55.330 [2024-11-26T00:52:18.247Z] =================================================================================================================== 00:06:55.330 [2024-11-26T00:52:18.247Z] Total : 21697.49 84.76 0.00 0.00 70290.00 4915.20 64527.75 00:06:55.902 00:06:55.902 real 0m6.390s 00:06:55.902 user 0m12.088s 00:06:55.902 sys 0m0.194s 00:06:55.902 00:52:18 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.902 ************************************ 00:06:55.902 END TEST bdev_verify 00:06:55.902 ************************************ 00:06:55.902 00:52:18 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:55.902 00:52:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:55.902 00:52:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:55.902 00:52:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.902 00:52:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:55.902 ************************************ 00:06:55.902 START TEST bdev_verify_big_io 00:06:55.902 ************************************ 00:06:55.902 00:52:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:55.902 [2024-11-26 00:52:18.686524] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:06:55.902 [2024-11-26 00:52:18.686640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73971 ] 00:06:56.163 [2024-11-26 00:52:18.818874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.163 [2024-11-26 00:52:18.848775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.163 [2024-11-26 00:52:18.870423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.163 [2024-11-26 00:52:18.870497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.425 Running I/O for 5 seconds... 
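Note: the MiB/s column in these bdevperf tables is just IOPS scaled by the I/O size, which makes the summaries easy to sanity-check. The final per-second sample of the verify run above reports 21888.00 IOPS at 4 KiB, and 21888 * 4096 / 2^20 = 85.50 MiB/s, exactly the printed figure; the same check applies to the 64 KiB big-I/O table below. As a one-liner:

    # Sanity-check a bdevperf throughput figure: MiB/s = IOPS * io_size_bytes / 2^20.
    iops=21888 io_size=4096
    awk -v n="$iops" -v s="$io_size" 'BEGIN { printf "%.2f MiB/s\n", n * s / 1048576 }'
    # -> 85.50 MiB/s, matching the last per-second sample of the verify run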
00:07:01.650 740.00 IOPS, 46.25 MiB/s [2024-11-26T00:52:25.511Z] 2345.50 IOPS, 146.59 MiB/s [2024-11-26T00:52:25.511Z] 2767.67 IOPS, 172.98 MiB/s 00:07:02.594 Latency(us) 00:07:02.594 [2024-11-26T00:52:25.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.594 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0x0 length 0xbd0b 00:07:02.594 Nvme0n1 : 5.55 155.91 9.74 0.00 0.00 780719.23 18350.08 1071160.71 00:07:02.594 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:02.594 Nvme0n1 : 5.78 88.64 5.54 0.00 0.00 1389090.26 10132.87 1271196.75 00:07:02.594 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0x0 length 0xa000 00:07:02.594 Nvme1n1 : 5.56 161.05 10.07 0.00 0.00 746354.50 61704.66 903388.55 00:07:02.594 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0xa000 length 0xa000 00:07:02.594 Nvme1n1 : 5.78 86.88 5.43 0.00 0.00 1367750.58 154060.01 1245385.65 00:07:02.594 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0x0 length 0x8000 00:07:02.594 Nvme2n1 : 5.66 162.31 10.14 0.00 0.00 714781.06 95178.44 738842.78 00:07:02.594 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0x8000 length 0x8000 00:07:02.594 Nvme2n1 : 5.89 83.06 5.19 0.00 0.00 1392721.55 104051.00 2606921.26 00:07:02.594 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0x0 length 0x8000 00:07:02.594 Nvme2n2 : 5.80 172.30 10.77 0.00 0.00 658073.48 58074.98 645277.54 00:07:02.594 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0x8000 length 0x8000 00:07:02.594 Nvme2n2 : 5.98 89.17 5.57 0.00 0.00 1268273.78 70980.53 2645637.91 00:07:02.594 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:02.594 Verification LBA range: start 0x0 length 0x8000 00:07:02.594 Nvme2n3 : 5.83 179.65 11.23 0.00 0.00 610374.33 33070.47 709805.29 00:07:02.594 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:02.595 Verification LBA range: start 0x8000 length 0x8000 00:07:02.595 Nvme2n3 : 5.98 94.92 5.93 0.00 0.00 1156978.61 13611.32 2697260.11 00:07:02.595 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:02.595 Verification LBA range: start 0x0 length 0x2000 00:07:02.595 Nvme3n1 : 5.97 209.45 13.09 0.00 0.00 507877.54 119.73 738842.78 00:07:02.595 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:02.595 Verification LBA range: start 0x2000 length 0x2000 00:07:02.595 Nvme3n1 : 5.99 104.36 6.52 0.00 0.00 1015161.54 1216.20 2774693.42 00:07:02.595 [2024-11-26T00:52:25.512Z] =================================================================================================================== 00:07:02.595 [2024-11-26T00:52:25.512Z] Total : 1587.70 99.23 0.00 0.00 866777.77 119.73 2774693.42 00:07:03.163 00:07:03.163 real 0m7.372s 00:07:03.163 user 0m13.987s 00:07:03.163 sys 0m0.243s 00:07:03.163 00:52:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.163 
************************************ 00:07:03.163 END TEST bdev_verify_big_io 00:07:03.163 ************************************ 00:07:03.163 00:52:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:03.163 00:52:26 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.163 00:52:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:03.163 00:52:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.163 00:52:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.163 ************************************ 00:07:03.163 START TEST bdev_write_zeroes 00:07:03.163 ************************************ 00:07:03.163 00:52:26 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:03.423 [2024-11-26 00:52:26.113136] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:03.423 [2024-11-26 00:52:26.113255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74077 ] 00:07:03.423 [2024-11-26 00:52:26.246526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:03.423 [2024-11-26 00:52:26.284972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.423 [2024-11-26 00:52:26.318000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.993 Running I/O for 1 seconds... 
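Note: bdev_write_zeroes reuses the same harness with -w write_zeroes -t 1 and no core mask, so bdevperf comes up on a single reactor (hence the "Total cores available: 1" notice above). The workload exercises each bdev's write_zeroes path rather than data-carrying writes, so there is no read-back comparison; success is completing the one-second run with zeroes in the Fail/s and TO/s columns of the table that follows. Whether a bdev advertises native write_zeroes support can be checked over RPC, for example:

    # List write_zeroes support per bdev (sketch; default RPC socket assumed).
    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | "\(.name)\t\(.supported_io_types.write_zeroes)"'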
00:07:05.501 35505.00 IOPS, 138.69 MiB/s 00:07:05.501 Latency(us) 00:07:05.501 [2024-11-26T00:52:28.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.501 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:05.501 Nvme0n1 : 1.41 4221.04 16.49 0.00 0.00 28874.94 5696.59 790464.98 00:07:05.501 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:05.501 Nvme1n1 : 1.33 4560.45 17.81 0.00 0.00 27540.03 8721.33 651730.31 00:07:05.501 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:05.501 Nvme2n1 : 1.33 4555.83 17.80 0.00 0.00 27422.04 8721.33 651730.31 00:07:05.501 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:05.501 Nvme2n2 : 1.34 4551.87 17.78 0.00 0.00 27372.39 8670.92 651730.31 00:07:05.501 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:05.501 Nvme2n3 : 1.34 4545.63 17.76 0.00 0.00 27832.71 7965.14 632371.99 00:07:05.501 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:05.501 Nvme3n1 : 1.34 4493.76 17.55 0.00 0.00 28133.92 8670.92 632371.99 00:07:05.501 [2024-11-26T00:52:28.418Z] =================================================================================================================== 00:07:05.501 [2024-11-26T00:52:28.418Z] Total : 26928.58 105.19 0.00 0.00 27858.20 5696.59 790464.98 00:07:05.501 00:07:05.501 real 0m2.309s 00:07:05.501 user 0m1.969s 00:07:05.501 sys 0m0.224s 00:07:05.501 00:52:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.501 ************************************ 00:07:05.502 END TEST bdev_write_zeroes 00:07:05.502 ************************************ 00:07:05.502 00:52:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:05.502 00:52:28 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.502 00:52:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:05.502 00:52:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.502 00:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.763 ************************************ 00:07:05.763 START TEST bdev_json_nonenclosed 00:07:05.763 ************************************ 00:07:05.763 00:52:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:05.763 [2024-11-26 00:52:28.493240] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:05.763 [2024-11-26 00:52:28.493360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74121 ] 00:07:05.763 [2024-11-26 00:52:28.627728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:05.763 [2024-11-26 00:52:28.657816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.025 [2024-11-26 00:52:28.689749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.025 [2024-11-26 00:52:28.689880] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:06.025 [2024-11-26 00:52:28.689905] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:06.025 [2024-11-26 00:52:28.689919] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.025 00:07:06.025 real 0m0.349s 00:07:06.025 user 0m0.147s 00:07:06.025 sys 0m0.097s 00:07:06.025 00:52:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.025 ************************************ 00:07:06.025 END TEST bdev_json_nonenclosed 00:07:06.025 ************************************ 00:07:06.025 00:52:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:06.025 00:52:28 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.025 00:52:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:06.025 00:52:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.025 00:52:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.025 ************************************ 00:07:06.025 START TEST bdev_json_nonarray 00:07:06.025 ************************************ 00:07:06.025 00:52:28 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:06.025 [2024-11-26 00:52:28.909670] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:06.025 [2024-11-26 00:52:28.909783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74141 ] 00:07:06.288 [2024-11-26 00:52:29.041345] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.288 [2024-11-26 00:52:29.072055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.288 [2024-11-26 00:52:29.091262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.288 [2024-11-26 00:52:29.091349] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
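Note: bdev_json_nonenclosed (above) and bdev_json_nonarray (below) are negative tests for the JSON config loader: the first feeds a document whose top level is not enclosed in {}, the second one where "subsystems" is not an array. Each run is expected to fail in json_config_prepare_ctx with exactly the error shown and to exit through spdk_app_stop with a non-zero code, which is the passing outcome here. The actual contents of nonenclosed.json and nonarray.json are not shown in the trace; hypothetical shapes that would trip these two checks:

    # Hypothetical malformed configs; the real test files may differ.
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF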
00:07:06.288 [2024-11-26 00:52:29.091366] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:06.288 [2024-11-26 00:52:29.091374] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:06.288 00:07:06.288 real 0m0.309s 00:07:06.288 user 0m0.121s 00:07:06.288 sys 0m0.085s 00:07:06.288 00:52:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.288 ************************************ 00:07:06.288 END TEST bdev_json_nonarray 00:07:06.288 ************************************ 00:07:06.288 00:52:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:06.288 00:52:29 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:06.288 00:52:29 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:06.288 00:52:29 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:06.549 00:52:29 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:06.549 00:07:06.549 real 0m30.814s 00:07:06.549 user 0m48.797s 00:07:06.549 sys 0m5.049s 00:07:06.549 ************************************ 00:07:06.549 END TEST blockdev_nvme 00:07:06.549 00:52:29 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.549 00:52:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.549 ************************************ 00:07:06.549 00:52:29 -- spdk/autotest.sh@209 -- # uname -s 00:07:06.549 00:52:29 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:06.549 00:52:29 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:06.549 00:52:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.549 00:52:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.549 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:07:06.549 ************************************ 00:07:06.549 START TEST blockdev_nvme_gpt 00:07:06.549 ************************************ 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:06.549 * Looking for test storage... 
00:07:06.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.549 00:52:29 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.549 --rc genhtml_branch_coverage=1 00:07:06.549 --rc genhtml_function_coverage=1 00:07:06.549 --rc genhtml_legend=1 00:07:06.549 --rc geninfo_all_blocks=1 00:07:06.549 --rc geninfo_unexecuted_blocks=1 00:07:06.549 00:07:06.549 ' 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.549 --rc 
genhtml_branch_coverage=1 00:07:06.549 --rc genhtml_function_coverage=1 00:07:06.549 --rc genhtml_legend=1 00:07:06.549 --rc geninfo_all_blocks=1 00:07:06.549 --rc geninfo_unexecuted_blocks=1 00:07:06.549 00:07:06.549 ' 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.549 --rc genhtml_branch_coverage=1 00:07:06.549 --rc genhtml_function_coverage=1 00:07:06.549 --rc genhtml_legend=1 00:07:06.549 --rc geninfo_all_blocks=1 00:07:06.549 --rc geninfo_unexecuted_blocks=1 00:07:06.549 00:07:06.549 ' 00:07:06.549 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.549 --rc genhtml_branch_coverage=1 00:07:06.549 --rc genhtml_function_coverage=1 00:07:06.549 --rc genhtml_legend=1 00:07:06.550 --rc geninfo_all_blocks=1 00:07:06.550 --rc geninfo_unexecuted_blocks=1 00:07:06.550 00:07:06.550 ' 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74225 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74225 
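Note: the lcov gate near the top of this test resolves through cmp_versions ('1.15 < 2') in scripts/common.sh, traced above: both version strings are split into fields on dots, dashes, and colons (IFS=.-:) and compared numerically field by field, with missing fields treated as zero. A compact sketch of the same idea:

    # Field-wise version comparison in the style of scripts/common.sh (sketch).
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller here: lt
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "1.15 is less than 2"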
00:07:06.550 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 74225 ']' 00:07:06.550 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.550 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.550 00:52:29 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:06.550 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.550 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.550 00:52:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:06.811 [2024-11-26 00:52:29.493068] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:06.811 [2024-11-26 00:52:29.493186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74225 ] 00:07:06.811 [2024-11-26 00:52:29.624636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.811 [2024-11-26 00:52:29.655065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.811 [2024-11-26 00:52:29.675330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.753 00:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.753 00:52:30 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:07.754 00:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:07.754 00:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:07.754 00:52:30 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:07.754 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:08.016 Waiting for block devices as requested 00:07:08.016 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:08.277 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:08.277 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:08.277 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:13.562 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:13.562 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:13.562 00:52:36 blockdev_nvme_gpt 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:13.562 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:13.563 00:52:36 
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:13.563 BYT; 00:07:13.563 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:13.563 BYT; 00:07:13.563 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:13.563 00:52:36 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:13.563 00:52:36 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:14.492 The operation has completed successfully. 00:07:14.492 00:52:37 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:15.423 The operation has completed successfully. 00:07:15.424 00:52:38 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:15.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:16.556 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.556 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.556 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.556 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:16.556 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:16.556 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.556 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.556 [] 00:07:16.556 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.556 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:16.556 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:16.557 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:16.557 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:16.557 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:16.557 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.557 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.818 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.818 00:52:39 blockdev_nvme_gpt -- 
bdev/blockdev.sh@739 -- # cat 00:07:16.818 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.818 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.818 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.818 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:16.818 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:16.818 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.818 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.080 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.080 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:17.080 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b6ee6180-a65d-4fb6-af27-f2414206beaf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b6ee6180-a65d-4fb6-af27-f2414206beaf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' 
"6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "9ad21a77-7134-45aa-968d-4ce7a8d5a74e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9ad21a77-7134-45aa-968d-4ce7a8d5a74e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "dbe9d29e-35f4-4066-ad4e-0e301d17bca1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dbe9d29e-35f4-4066-ad4e-0e301d17bca1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "afd1018d-941d-4635-a21a-a19e82d30797"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "afd1018d-941d-4635-a21a-a19e82d30797",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4754fb65-c159-4b2b-a7d8-1651608d34e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4754fb65-c159-4b2b-a7d8-1651608d34e5",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:17.080 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:17.080 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:17.080 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:17.081 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:17.081 00:52:39 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 74225 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 74225 ']' 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 74225 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74225 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.081 killing process with pid 74225 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74225' 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 74225 00:07:17.081 00:52:39 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 74225 00:07:17.358 00:52:40 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:17.358 00:52:40 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:17.358 00:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:17.358 00:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.358 00:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:17.358 ************************************ 00:07:17.358 START TEST bdev_hello_world 00:07:17.358 ************************************ 00:07:17.358 00:52:40 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:17.359 [2024-11-26 00:52:40.185531] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:17.359 [2024-11-26 00:52:40.185646] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74827 ] 00:07:17.628 [2024-11-26 00:52:40.320024] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:17.628 [2024-11-26 00:52:40.349637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.628 [2024-11-26 00:52:40.369206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.888 [2024-11-26 00:52:40.739579] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:17.889 [2024-11-26 00:52:40.739639] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:17.889 [2024-11-26 00:52:40.739662] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:17.889 [2024-11-26 00:52:40.741740] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:17.889 [2024-11-26 00:52:40.742752] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:17.889 [2024-11-26 00:52:40.742781] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:17.889 [2024-11-26 00:52:40.743342] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:17.889 00:07:17.889 [2024-11-26 00:52:40.743368] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:18.148 00:07:18.148 real 0m0.774s 00:07:18.148 user 0m0.505s 00:07:18.148 sys 0m0.165s 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:18.148 ************************************ 00:07:18.148 END TEST bdev_hello_world 00:07:18.148 ************************************ 00:07:18.148 00:52:40 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:18.148 00:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.148 00:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.148 00:52:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.148 ************************************ 00:07:18.148 START TEST bdev_bounds 00:07:18.148 ************************************ 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74858 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.148 Process bdevio pid: 74858 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74858' 00:07:18.148 
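The waitforlisten 74858 call below blocks until the freshly launched bdevio process answers on its RPC socket at /var/tmp/spdk.sock. A rough standalone equivalent, sketched under the assumption that a 100 x 0.1 s retry budget is acceptable (rpc_get_methods is a standard SPDK RPC that any running app answers):

    # Poll the app's RPC socket until it responds; give up after ~10 s.
    sock=/var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done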
00:52:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74858 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74858 ']' 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.148 00:52:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:18.148 [2024-11-26 00:52:41.013992] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:18.148 [2024-11-26 00:52:41.014109] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74858 ] 00:07:18.407 [2024-11-26 00:52:41.147749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:18.407 [2024-11-26 00:52:41.177409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.407 [2024-11-26 00:52:41.198875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.407 [2024-11-26 00:52:41.199063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.407 [2024-11-26 00:52:41.199142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.978 00:52:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.978 00:52:41 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:18.978 00:52:41 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:19.239 I/O targets: 00:07:19.239 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:19.239 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:19.239 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:19.239 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:19.239 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:19.239 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:19.239 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:19.239 00:07:19.239 00:07:19.239 CUnit - A unit testing framework for C - Version 2.1-3 00:07:19.239 http://cunit.sourceforge.net/ 00:07:19.239 00:07:19.239 00:07:19.239 Suite: bdevio tests on: Nvme3n1 00:07:19.239 Test: blockdev write read block ...passed 00:07:19.239 Test: blockdev write zeroes read block ...passed 00:07:19.239 Test: blockdev write zeroes read no split ...passed 00:07:19.239 Test: blockdev write zeroes read split ...passed 00:07:19.239 Test: blockdev write zeroes read split partial ...passed 00:07:19.239 Test: blockdev reset ...[2024-11-26 00:52:41.968230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:19.239 [2024-11-26 00:52:41.970370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:07:19.239 passed 00:07:19.239 Test: blockdev write read 8 blocks ...passed 00:07:19.239 Test: blockdev write read size > 128k ...passed 00:07:19.239 Test: blockdev write read invalid size ...passed 00:07:19.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.239 Test: blockdev write read max offset ...passed 00:07:19.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.239 Test: blockdev writev readv 8 blocks ...passed 00:07:19.239 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.239 Test: blockdev writev readv block ...passed 00:07:19.239 Test: blockdev writev readv size > 128k ...passed 00:07:19.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.239 Test: blockdev comparev and writev ...[2024-11-26 00:52:41.987252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c980e000 len:0x1000 00:07:19.239 [2024-11-26 00:52:41.987298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.239 passed 00:07:19.239 Test: blockdev nvme passthru rw ...passed 00:07:19.239 Test: blockdev nvme passthru vendor specific ...[2024-11-26 00:52:41.988324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.239 passed 00:07:19.239 Test: blockdev nvme admin passthru ...[2024-11-26 00:52:41.988366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.239 passed 00:07:19.239 Test: blockdev copy ...passed 00:07:19.239 Suite: bdevio tests on: Nvme2n3 00:07:19.239 Test: blockdev write read block ...passed 00:07:19.239 Test: blockdev write zeroes read block ...passed 00:07:19.239 Test: blockdev write zeroes read no split ...passed 00:07:19.239 Test: blockdev write zeroes read split ...passed 00:07:19.239 Test: blockdev write zeroes read split partial ...passed 00:07:19.239 Test: blockdev reset ...[2024-11-26 00:52:42.003274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:19.239 [2024-11-26 00:52:42.006878] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
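Each bdevio suite opens by resetting the controller that backs the bdev under test; the resetting/successful NOTICE pairs above and below come from that step. The same path can be driven by hand against a running app, a sketch assuming the bdev_nvme_reset_controller RPC is present in this SPDK build and reusing the socket from above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Reset the NVMe controller named Nvme2 (PCIe 0000:00:12.0 per the bdev dump earlier).
    "$rpc" -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme2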
00:07:19.239 passed 00:07:19.239 Test: blockdev write read 8 blocks ...passed 00:07:19.239 Test: blockdev write read size > 128k ...passed 00:07:19.239 Test: blockdev write read invalid size ...passed 00:07:19.239 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.239 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.239 Test: blockdev write read max offset ...passed 00:07:19.239 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.239 Test: blockdev writev readv 8 blocks ...passed 00:07:19.239 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.239 Test: blockdev writev readv block ...passed 00:07:19.239 Test: blockdev writev readv size > 128k ...passed 00:07:19.239 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.240 Test: blockdev comparev and writev ...[2024-11-26 00:52:42.018565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c980a000 len:0x1000 00:07:19.240 [2024-11-26 00:52:42.018606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.240 passed 00:07:19.240 Test: blockdev nvme passthru rw ...passed 00:07:19.240 Test: blockdev nvme passthru vendor specific ...[2024-11-26 00:52:42.019856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.240 [2024-11-26 00:52:42.019880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.240 passed 00:07:19.240 Test: blockdev nvme admin passthru ...passed 00:07:19.240 Test: blockdev copy ...passed 00:07:19.240 Suite: bdevio tests on: Nvme2n2 00:07:19.240 Test: blockdev write read block ...passed 00:07:19.240 Test: blockdev write zeroes read block ...passed 00:07:19.240 Test: blockdev write zeroes read no split ...passed 00:07:19.240 Test: blockdev write zeroes read split ...passed 00:07:19.240 Test: blockdev write zeroes read split partial ...passed 00:07:19.240 Test: blockdev reset ...[2024-11-26 00:52:42.041347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:19.240 [2024-11-26 00:52:42.043695] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
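The INVALID OPCODE (00/01) completions printed by passing passthru tests are the deliberately provoked error path: the test submits a reserved/vendor-specific opcode and expects the controller to reject it. Passthru also only runs where the bdev exposes it; the dump earlier reports nvme_admin and nvme_io as true for the raw NVMe bdevs but false for the GPT partition bdevs, which is why the Nvme1n1p* suites further down carry no passthru traffic. One way to see the split, a sketch reusing the fields shown in that dump:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Tabulate which bdevs accept NVMe admin/IO passthru commands.
    "$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | "\(.name)\tnvme_admin=\(.supported_io_types.nvme_admin)\tnvme_io=\(.supported_io_types.nvme_io)"'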
00:07:19.240 passed 00:07:19.240 Test: blockdev write read 8 blocks ...passed 00:07:19.240 Test: blockdev write read size > 128k ...passed 00:07:19.240 Test: blockdev write read invalid size ...passed 00:07:19.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.240 Test: blockdev write read max offset ...passed 00:07:19.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.240 Test: blockdev writev readv 8 blocks ...passed 00:07:19.240 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.240 Test: blockdev writev readv block ...passed 00:07:19.240 Test: blockdev writev readv size > 128k ...passed 00:07:19.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.240 Test: blockdev comparev and writev ...[2024-11-26 00:52:42.057695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b3405000 len:0x1000 00:07:19.240 [2024-11-26 00:52:42.057735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.240 passed 00:07:19.240 Test: blockdev nvme passthru rw ...passed 00:07:19.240 Test: blockdev nvme passthru vendor specific ...[2024-11-26 00:52:42.060639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.240 [2024-11-26 00:52:42.060671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.240 passed 00:07:19.240 Test: blockdev nvme admin passthru ...passed 00:07:19.240 Test: blockdev copy ...passed 00:07:19.240 Suite: bdevio tests on: Nvme2n1 00:07:19.240 Test: blockdev write read block ...passed 00:07:19.240 Test: blockdev write zeroes read block ...passed 00:07:19.240 Test: blockdev write zeroes read no split ...passed 00:07:19.240 Test: blockdev write zeroes read split ...passed 00:07:19.240 Test: blockdev write zeroes read split partial ...passed 00:07:19.240 Test: blockdev reset ...[2024-11-26 00:52:42.079245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:19.240 [2024-11-26 00:52:42.081034] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:19.240 passed 00:07:19.240 Test: blockdev write read 8 blocks ...passed 00:07:19.240 Test: blockdev write read size > 128k ...passed 00:07:19.240 Test: blockdev write read invalid size ...passed 00:07:19.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.240 Test: blockdev write read max offset ...passed 00:07:19.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.240 Test: blockdev writev readv 8 blocks ...passed 00:07:19.240 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.240 Test: blockdev writev readv block ...passed 00:07:19.240 Test: blockdev writev readv size > 128k ...passed 00:07:19.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.240 Test: blockdev comparev and writev ...[2024-11-26 00:52:42.092802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9c02000 len:0x1000 00:07:19.240 [2024-11-26 00:52:42.092837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.240 passed 00:07:19.240 Test: blockdev nvme passthru rw ...passed 00:07:19.240 Test: blockdev nvme passthru vendor specific ...[2024-11-26 00:52:42.094978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:19.240 [2024-11-26 00:52:42.095007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:19.240 passed 00:07:19.240 Test: blockdev nvme admin passthru ...passed 00:07:19.240 Test: blockdev copy ...passed 00:07:19.240 Suite: bdevio tests on: Nvme1n1p2 00:07:19.240 Test: blockdev write read block ...passed 00:07:19.240 Test: blockdev write zeroes read block ...passed 00:07:19.240 Test: blockdev write zeroes read no split ...passed 00:07:19.240 Test: blockdev write zeroes read split ...passed 00:07:19.240 Test: blockdev write zeroes read split partial ...passed 00:07:19.240 Test: blockdev reset ...[2024-11-26 00:52:42.120058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:19.240 [2024-11-26 00:52:42.121499] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:19.240 passed 00:07:19.240 Test: blockdev write read 8 blocks ...passed 00:07:19.240 Test: blockdev write read size > 128k ...passed 00:07:19.240 Test: blockdev write read invalid size ...passed 00:07:19.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.240 Test: blockdev write read max offset ...passed 00:07:19.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.240 Test: blockdev writev readv 8 blocks ...passed 00:07:19.240 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.240 Test: blockdev writev readv block ...passed 00:07:19.240 Test: blockdev writev readv size > 128k ...passed 00:07:19.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.240 Test: blockdev comparev and writev ...[2024-11-26 00:52:42.136134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e543b000 len:0x1000 00:07:19.240 [2024-11-26 00:52:42.136172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.240 passed 00:07:19.240 Test: blockdev nvme passthru rw ...passed 00:07:19.240 Test: blockdev nvme passthru vendor specific ...passed 00:07:19.240 Test: blockdev nvme admin passthru ...passed 00:07:19.240 Test: blockdev copy ...passed 00:07:19.240 Suite: bdevio tests on: Nvme1n1p1 00:07:19.240 Test: blockdev write read block ...passed 00:07:19.240 Test: blockdev write zeroes read block ...passed 00:07:19.240 Test: blockdev write zeroes read no split ...passed 00:07:19.240 Test: blockdev write zeroes read split ...passed 00:07:19.502 Test: blockdev write zeroes read split partial ...passed 00:07:19.502 Test: blockdev reset ...[2024-11-26 00:52:42.156059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:19.502 [2024-11-26 00:52:42.158626] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
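The Nvme1n1p2 suite above and the Nvme1n1p1 suite below exercise the two GPT partitions written at the top of this section, and the partition offsets surface directly in the compare commands: lba:655360 for p2 above and lba:256 for p1 below, matching the offset_blocks values in the bdev dump. For reference, the typing commands repeated from the run above (SPDK's gpt module auto-exposes only partitions carrying a type GUID it recognizes), plus a sketched cross-check of the offset:

    # Repeated from the run above: type partition 1 with the current SPDK GPT GUID,
    # partition 2 with the older GUID that SPDK also claims.
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
    # Cross-check: the partition's block offset as reported by the bdev layer (expect 655360).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme1n1p2 | jq '.[0].driver_specific.gpt.offset_blocks'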
00:07:19.502 passed 00:07:19.502 Test: blockdev write read 8 blocks ...passed 00:07:19.502 Test: blockdev write read size > 128k ...passed 00:07:19.502 Test: blockdev write read invalid size ...passed 00:07:19.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.502 Test: blockdev write read max offset ...passed 00:07:19.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.502 Test: blockdev writev readv 8 blocks ...passed 00:07:19.502 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.502 Test: blockdev writev readv block ...passed 00:07:19.502 Test: blockdev writev readv size > 128k ...passed 00:07:19.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.502 Test: blockdev comparev and writev ...[2024-11-26 00:52:42.174066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e5437000 len:0x1000 00:07:19.502 [2024-11-26 00:52:42.174113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:19.502 passed 00:07:19.502 Test: blockdev nvme passthru rw ...passed 00:07:19.502 Test: blockdev nvme passthru vendor specific ...passed 00:07:19.502 Test: blockdev nvme admin passthru ...passed 00:07:19.502 Test: blockdev copy ...passed 00:07:19.502 Suite: bdevio tests on: Nvme0n1 00:07:19.502 Test: blockdev write read block ...passed 00:07:19.502 Test: blockdev write zeroes read block ...passed 00:07:19.502 Test: blockdev write zeroes read no split ...passed 00:07:19.502 Test: blockdev write zeroes read split ...passed 00:07:19.502 Test: blockdev write zeroes read split partial ...passed 00:07:19.502 Test: blockdev reset ...[2024-11-26 00:52:42.195249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:19.502 [2024-11-26 00:52:42.197913] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:19.502 passed 00:07:19.502 Test: blockdev write read 8 blocks ...passed 00:07:19.502 Test: blockdev write read size > 128k ...passed 00:07:19.502 Test: blockdev write read invalid size ...passed 00:07:19.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:19.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:19.502 Test: blockdev write read max offset ...passed 00:07:19.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:19.502 Test: blockdev writev readv 8 blocks ...passed 00:07:19.502 Test: blockdev writev readv 30 x 1block ...passed 00:07:19.502 Test: blockdev writev readv block ...passed 00:07:19.502 Test: blockdev writev readv size > 128k ...passed 00:07:19.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:19.502 Test: blockdev comparev and writev ...[2024-11-26 00:52:42.213127] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:19.502 separate metadata which is not supported yet. 
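Nvme0n1 is the only target here formatted with separate metadata (md_size 64 with md_interleave false in the dump earlier), so bdevio deliberately skips compare-and-write on it rather than failing the suite. A sketch for listing which bdevs carry separate (non-interleaved) metadata, using the same fields:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Print bdevs whose namespaces have separate metadata alongside each data block.
    "$rpc" -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | "\(.name) md_size=\(.md_size)"'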
00:07:19.502 passed 00:07:19.502 Test: blockdev nvme passthru rw ...passed 00:07:19.502 Test: blockdev nvme passthru vendor specific ...[2024-11-26 00:52:42.214106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:19.502 [2024-11-26 00:52:42.214149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:19.502 passed 00:07:19.502 Test: blockdev nvme admin passthru ...passed 00:07:19.502 Test: blockdev copy ...passed 00:07:19.502 00:07:19.502 Run Summary: Type Total Ran Passed Failed Inactive 00:07:19.502 suites 7 7 n/a 0 0 00:07:19.502 tests 161 161 161 0 0 00:07:19.502 asserts 1025 1025 1025 0 n/a 00:07:19.502 00:07:19.502 Elapsed time = 0.589 seconds 00:07:19.502 0 00:07:19.502 00:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74858 00:07:19.502 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74858 ']' 00:07:19.502 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74858 00:07:19.502 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:19.502 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74858 00:07:19.503 killing process with pid 74858 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74858' 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74858 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74858 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:19.503 00:07:19.503 real 0m1.446s 00:07:19.503 user 0m3.649s 00:07:19.503 sys 0m0.244s 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.503 00:52:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:19.503 ************************************ 00:07:19.503 END TEST bdev_bounds 00:07:19.503 ************************************ 00:07:19.764 00:52:42 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:19.764 00:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:19.764 00:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.764 00:52:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:19.764 ************************************ 00:07:19.764 START TEST bdev_nbd 00:07:19.764 ************************************ 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:19.764 00:52:42 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74911 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74911 /var/tmp/spdk-nbd.sock 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74911 ']' 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:19.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.764 00:52:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:19.764 [2024-11-26 00:52:42.531123] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
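The nbd phase below exports every bdev through a kernel NBD device and smoke-tests it: nbd_start_disk assigns a /dev/nbdN per bdev, waitfornbd polls /proc/partitions until the node shows up, and a single 4096-byte O_DIRECT dd verifies data flows end to end (the 1+0 records in/out stanzas further down). A condensed manual session, sketched against the socket the bdev_svc app was launched on; the scratch output path is arbitrary:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    nbd=$("$rpc" -s "$sock" nbd_start_disk Nvme0n1)    # RPC prints the auto-assigned device, e.g. /dev/nbd0
    grep -q -w "$(basename "$nbd")" /proc/partitions   # the readiness check waitfornbd performs
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # one-block direct-I/O smoke read
    "$rpc" -s "$sock" nbd_get_disks                    # list active exports
    "$rpc" -s "$sock" nbd_stop_disk "$nbd"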
00:07:19.764 [2024-11-26 00:52:42.531235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.764 [2024-11-26 00:52:42.663822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.025 [2024-11-26 00:52:42.695624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.025 [2024-11-26 00:52:42.714715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.597 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.858 1+0 records in 00:07:20.858 1+0 records out 00:07:20.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098962 s, 4.1 MB/s 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:20.858 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.859 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:20.859 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:20.859 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.859 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.859 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.119 1+0 records in 00:07:21.119 1+0 records out 00:07:21.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960686 s, 4.3 MB/s 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.119 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.120 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.120 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.120 00:52:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.120 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.120 00:52:43 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.120 00:52:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.382 1+0 records in 00:07:21.382 1+0 records out 00:07:21.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091049 s, 4.5 MB/s 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.382 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.643 1+0 records in 00:07:21.643 1+0 records out 00:07:21.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000984962 s, 4.2 MB/s 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.643 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.904 1+0 records in 00:07:21.904 1+0 records out 00:07:21.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073626 s, 5.6 MB/s 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.904 1+0 records in 00:07:21.904 1+0 records out 00:07:21.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797434 s, 5.1 MB/s 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.904 00:52:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:22.165 1+0 records in 00:07:22.165 1+0 records out 00:07:22.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106593 s, 3.8 MB/s 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:22.165 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd0", 00:07:22.427 "bdev_name": "Nvme0n1" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd1", 00:07:22.427 "bdev_name": "Nvme1n1p1" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd2", 00:07:22.427 "bdev_name": "Nvme1n1p2" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd3", 00:07:22.427 "bdev_name": "Nvme2n1" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd4", 00:07:22.427 "bdev_name": "Nvme2n2" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd5", 00:07:22.427 "bdev_name": "Nvme2n3" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd6", 00:07:22.427 "bdev_name": "Nvme3n1" 00:07:22.427 } 00:07:22.427 ]' 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd0", 00:07:22.427 "bdev_name": "Nvme0n1" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd1", 00:07:22.427 "bdev_name": "Nvme1n1p1" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd2", 00:07:22.427 "bdev_name": "Nvme1n1p2" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd3", 00:07:22.427 "bdev_name": "Nvme2n1" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd4", 00:07:22.427 "bdev_name": "Nvme2n2" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd5", 00:07:22.427 "bdev_name": "Nvme2n3" 00:07:22.427 }, 00:07:22.427 { 00:07:22.427 "nbd_device": "/dev/nbd6", 00:07:22.427 "bdev_name": "Nvme3n1" 00:07:22.427 } 00:07:22.427 ]' 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.427 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.688 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.949 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.210 00:52:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.210 00:52:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.471 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.730 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.990 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.250 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:24.251 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.251 00:52:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.251 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:24.512 /dev/nbd0 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.512 1+0 records in 00:07:24.512 1+0 records out 00:07:24.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103353 s, 4.0 MB/s 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.512 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:24.774 /dev/nbd1 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:24.774 00:52:47 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.774 1+0 records in 00:07:24.774 1+0 records out 00:07:24.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102294 s, 4.0 MB/s 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.774 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:24.774 /dev/nbd10 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.034 1+0 records in 00:07:25.034 1+0 records out 00:07:25.034 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00156617 s, 2.6 MB/s 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.034 
00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:25.034 /dev/nbd11 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.034 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.296 1+0 records in 00:07:25.296 1+0 records out 00:07:25.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519376 s, 7.9 MB/s 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:25.296 00:52:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:25.296 /dev/nbd12 00:07:25.296 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:25.296 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:25.296 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:25.296 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.296 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.296 00:52:48 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.297 1+0 records in 00:07:25.297 1+0 records out 00:07:25.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880542 s, 4.7 MB/s 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:25.297 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:25.557 /dev/nbd13 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.557 1+0 records in 00:07:25.557 1+0 records out 00:07:25.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000790481 s, 5.2 MB/s 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:25.557 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:25.815 /dev/nbd14 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.815 1+0 records in 00:07:25.815 1+0 records out 00:07:25.815 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00126417 s, 3.2 MB/s 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.815 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.076 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd0", 00:07:26.076 "bdev_name": "Nvme0n1" 00:07:26.076 }, 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd1", 00:07:26.076 "bdev_name": "Nvme1n1p1" 00:07:26.076 }, 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd10", 00:07:26.076 "bdev_name": "Nvme1n1p2" 00:07:26.076 }, 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd11", 00:07:26.076 "bdev_name": "Nvme2n1" 00:07:26.076 
}, 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd12", 00:07:26.076 "bdev_name": "Nvme2n2" 00:07:26.076 }, 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd13", 00:07:26.076 "bdev_name": "Nvme2n3" 00:07:26.076 }, 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd14", 00:07:26.076 "bdev_name": "Nvme3n1" 00:07:26.076 } 00:07:26.076 ]' 00:07:26.076 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd0", 00:07:26.076 "bdev_name": "Nvme0n1" 00:07:26.076 }, 00:07:26.076 { 00:07:26.076 "nbd_device": "/dev/nbd1", 00:07:26.076 "bdev_name": "Nvme1n1p1" 00:07:26.076 }, 00:07:26.076 { 00:07:26.077 "nbd_device": "/dev/nbd10", 00:07:26.077 "bdev_name": "Nvme1n1p2" 00:07:26.077 }, 00:07:26.077 { 00:07:26.077 "nbd_device": "/dev/nbd11", 00:07:26.077 "bdev_name": "Nvme2n1" 00:07:26.077 }, 00:07:26.077 { 00:07:26.077 "nbd_device": "/dev/nbd12", 00:07:26.077 "bdev_name": "Nvme2n2" 00:07:26.077 }, 00:07:26.077 { 00:07:26.077 "nbd_device": "/dev/nbd13", 00:07:26.077 "bdev_name": "Nvme2n3" 00:07:26.077 }, 00:07:26.077 { 00:07:26.077 "nbd_device": "/dev/nbd14", 00:07:26.077 "bdev_name": "Nvme3n1" 00:07:26.077 } 00:07:26.077 ]' 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:26.077 /dev/nbd1 00:07:26.077 /dev/nbd10 00:07:26.077 /dev/nbd11 00:07:26.077 /dev/nbd12 00:07:26.077 /dev/nbd13 00:07:26.077 /dev/nbd14' 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:26.077 /dev/nbd1 00:07:26.077 /dev/nbd10 00:07:26.077 /dev/nbd11 00:07:26.077 /dev/nbd12 00:07:26.077 /dev/nbd13 00:07:26.077 /dev/nbd14' 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:26.077 256+0 records in 00:07:26.077 256+0 records out 00:07:26.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043794 s, 239 MB/s 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.077 00:52:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:26.338 256+0 records in 00:07:26.338 256+0 records out 00:07:26.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.236988 s, 4.4 MB/s 00:07:26.338 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.338 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:26.599 256+0 records in 00:07:26.599 256+0 records out 00:07:26.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193778 s, 5.4 MB/s 00:07:26.599 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.599 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:26.599 256+0 records in 00:07:26.599 256+0 records out 00:07:26.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0887019 s, 11.8 MB/s 00:07:26.599 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.599 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:26.860 256+0 records in 00:07:26.860 256+0 records out 00:07:26.860 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.193451 s, 5.4 MB/s 00:07:26.860 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.860 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:27.121 256+0 records in 00:07:27.121 256+0 records out 00:07:27.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243718 s, 4.3 MB/s 00:07:27.121 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.121 00:52:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:27.383 256+0 records in 00:07:27.383 256+0 records out 00:07:27.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.198825 s, 5.3 MB/s 00:07:27.383 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:27.383 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:27.644 256+0 records in 00:07:27.644 256+0 records out 00:07:27.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.208692 s, 5.0 MB/s 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = 
write ']' 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.644 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.908 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:28.170 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:28.170 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:28.170 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:28.170 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.170 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.170 00:52:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:28.170 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.170 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.170 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.170 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.432 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.694 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.956 00:52:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 
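The trace in this section leans on two helpers throughout: waitfornbd, which blocks until a freshly attached device both appears in /proc/partitions and survives a direct read, and waitfornbd_exit, which polls until a stopped device disappears again. A minimal reconstruction from the traced commands follows; it is a sketch, not the verbatim SPDK source — the retry sleep and scratch-file path are assumptions.

    # Reconstructed from the xtrace above (autotest_common.sh@872-893); a sketch only.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do            # @875: wait for the kernel to list it
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                              # retry interval: assumption
        done
        for ((i = 1; i <= 20; i++)); do            # @888: prove it is actually readable
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct \
                || { sleep 0.1; continue; }
            size=$(stat -c %s /tmp/nbdtest)        # @890: did a full block come back?
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

waitfornbd_exit (nbd_common.sh@35-45) is the teardown mirror image: the same bounded loop, returning once grep no longer finds the name in /proc/partitions.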
00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:29.228 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:29.510 malloc_lvol_verify 00:07:29.510 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:29.771 533ee12b-4fd5-4679-ad38-830b508d77b6 00:07:29.771 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:30.032 817ebae6-f5f0-4725-8622-d95fe5acdbfb 00:07:30.032 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:30.032 /dev/nbd0 00:07:30.032 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:30.032 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:30.032 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:30.033 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:30.033 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:30.033 mke2fs 1.47.0 (5-Feb-2023) 00:07:30.033 Discarding device blocks: 0/4096 done 00:07:30.033 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:30.033 00:07:30.033 Allocating group tables: 0/1 done 00:07:30.033 Writing inode tables: 0/1 done 00:07:30.033 Creating journal (1024 blocks): done 00:07:30.292 Writing superblocks and filesystem accounting information: 0/1 done 00:07:30.292 00:07:30.293 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:30.293 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.293 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:30.293 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
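Once the seven-disk teardown checks out (nbd_get_disks returns an empty list, so the grep -c count is 0), nbd_with_lvol_verify runs one more round trip: build a malloc bdev, carve a logical volume store and volume out of it, expose the volume over NBD, and put a filesystem on it. Condensed to the bare RPC sequence, with socket, names, and sizes exactly as logged; the size comments are inferred from the RPC argument conventions and the mke2fs output, not stated in the trace:

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB volume in lvstore "lvs"
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                                                # 4096 1k blocks per mke2fs
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0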
00:07:30.293 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:30.293 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.293 00:52:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74911 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74911 ']' 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74911 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74911 00:07:30.293 killing process with pid 74911 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74911' 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74911 00:07:30.293 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74911 00:07:30.554 ************************************ 00:07:30.554 END TEST bdev_nbd 00:07:30.554 ************************************ 00:07:30.554 00:52:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:30.554 00:07:30.554 real 0m10.906s 00:07:30.554 user 0m15.264s 00:07:30.554 sys 0m3.733s 00:07:30.554 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.554 00:52:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:30.554 00:52:53 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:30.554 00:52:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:30.554 skipping fio tests on NVMe due to multi-ns failures. 00:07:30.554 00:52:53 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:30.554 00:52:53 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
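With the NBD checks done, killprocess tears down the SPDK target (pid 74911, running as reactor_0). A sketch of the helper as exercised by this run; the real autotest_common.sh version has extra branches (sudo-wrapped processes, non-Linux hosts) that this trace never reaches:

    # Sketch of killprocess (autotest_common.sh@954-978) as traced above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                   # @958: process must still exist
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
            [ "$name" = sudo ] && return 1           # simplification of the sudo branch
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                  # reap so the caller can assert exit
    }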
00:07:30.554 00:52:53 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:30.554 00:52:53 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:30.554 00:52:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:30.554 00:52:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.554 00:52:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.554 ************************************ 00:07:30.554 START TEST bdev_verify 00:07:30.554 ************************************ 00:07:30.554 00:52:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:30.815 [2024-11-26 00:52:53.499985] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:30.815 [2024-11-26 00:52:53.500090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75317 ] 00:07:30.815 [2024-11-26 00:52:53.633571] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:30.815 [2024-11-26 00:52:53.662461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.815 [2024-11-26 00:52:53.683794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.815 [2024-11-26 00:52:53.683911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.387 Running I/O for 5 seconds... 
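The verify stage drives all seven bdevs through the bdevperf example app with a write-read-compare workload. The invocation from the trace, with the common flags decoded (-C is passed through exactly as logged; see bdevperf's help for its meaning):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # -q 128     per-target queue depth
    # -o 4096    I/O size in bytes
    # -w verify  write a pattern, read it back, compare
    # -t 5       run time in seconds
    # -m 0x3     core mask: two reactors, cores 0 and 1
    "$bdevperf" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3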
00:07:33.713 17517.00 IOPS, 68.43 MiB/s [2024-11-26T00:52:57.573Z] 18954.00 IOPS, 74.04 MiB/s [2024-11-26T00:52:58.511Z] 19901.00 IOPS, 77.74 MiB/s [2024-11-26T00:52:59.445Z] 20021.50 IOPS, 78.21 MiB/s [2024-11-26T00:52:59.445Z] 20109.20 IOPS, 78.55 MiB/s 00:07:36.528 Latency(us) 00:07:36.528 [2024-11-26T00:52:59.445Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.528 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.528 Verification LBA range: start 0x0 length 0xbd0bd 00:07:36.528 Nvme0n1 : 5.08 1411.74 5.51 0.00 0.00 90129.13 14216.27 99211.42 00:07:36.528 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:36.529 Nvme0n1 : 5.09 1421.50 5.55 0.00 0.00 89824.90 17039.36 89532.26 00:07:36.529 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x0 length 0x4ff80 00:07:36.529 Nvme1n1p1 : 5.09 1415.03 5.53 0.00 0.00 89901.98 5343.70 89128.96 00:07:36.529 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:36.529 Nvme1n1p1 : 5.09 1420.46 5.55 0.00 0.00 89682.31 11695.66 80659.69 00:07:36.529 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x0 length 0x4ff7f 00:07:36.529 Nvme1n1p2 : 5.10 1416.13 5.53 0.00 0.00 89649.84 7007.31 81869.59 00:07:36.529 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:36.529 Nvme1n1p2 : 5.09 1419.02 5.54 0.00 0.00 89583.02 9981.64 79449.80 00:07:36.529 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x0 length 0x80000 00:07:36.529 Nvme2n1 : 5.10 1412.01 5.52 0.00 0.00 89576.80 6402.36 80659.69 00:07:36.529 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x80000 length 0x80000 00:07:36.529 Nvme2n1 : 5.10 1418.30 5.54 0.00 0.00 89437.90 9477.51 80256.39 00:07:36.529 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x0 length 0x80000 00:07:36.529 Nvme2n2 : 5.10 1411.40 5.51 0.00 0.00 89401.31 5973.86 81062.99 00:07:36.529 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x80000 length 0x80000 00:07:36.529 Nvme2n2 : 5.10 1417.06 5.54 0.00 0.00 89306.21 8166.79 83482.78 00:07:36.529 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x0 length 0x80000 00:07:36.529 Nvme2n3 : 5.11 1421.95 5.55 0.00 0.00 88696.41 14317.10 70173.93 00:07:36.529 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x80000 length 0x80000 00:07:36.529 Nvme2n3 : 5.10 1417.67 5.54 0.00 0.00 89124.39 14216.27 97598.23 00:07:36.529 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x0 length 0x20000 00:07:36.529 Nvme3n1 : 5.11 1421.02 5.55 0.00 0.00 88554.46 14317.10 74206.92 00:07:36.529 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:36.529 Verification LBA range: start 0x20000 length 0x20000 00:07:36.529 Nvme3n1 : 
5.10 1416.41 5.53 0.00 0.00 88972.14 14216.27 98404.82 00:07:36.529 [2024-11-26T00:52:59.446Z] =================================================================================================================== 00:07:36.529 [2024-11-26T00:52:59.446Z] Total : 19839.71 77.50 0.00 0.00 89416.12 5343.70 99211.42 00:07:37.100 00:07:37.101 real 0m6.371s 00:07:37.101 user 0m11.994s 00:07:37.101 sys 0m0.230s 00:07:37.101 00:52:59 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.101 ************************************ 00:07:37.101 END TEST bdev_verify 00:07:37.101 ************************************ 00:07:37.101 00:52:59 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:37.101 00:52:59 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:37.101 00:52:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:37.101 00:52:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.101 00:52:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.101 ************************************ 00:07:37.101 START TEST bdev_verify_big_io 00:07:37.101 ************************************ 00:07:37.101 00:52:59 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:37.101 [2024-11-26 00:52:59.935290] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:37.101 [2024-11-26 00:52:59.935398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75410 ] 00:07:37.363 [2024-11-26 00:53:00.067936] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:37.363 [2024-11-26 00:53:00.097705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.363 [2024-11-26 00:53:00.119127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.363 [2024-11-26 00:53:00.119229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.935 Running I/O for 5 seconds... 
00:07:43.767 832.00 IOPS, 52.00 MiB/s [2024-11-26T00:53:06.942Z] 2338.00 IOPS, 146.12 MiB/s [2024-11-26T00:53:06.942Z] 2655.67 IOPS, 165.98 MiB/s 00:07:44.025 Latency(us) 00:07:44.025 [2024-11-26T00:53:06.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.025 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:44.025 Verification LBA range: start 0x0 length 0xbd0b 00:07:44.025 Nvme0n1 : 5.92 97.23 6.08 0.00 0.00 1265435.74 21475.64 1393799.48 00:07:44.025 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:44.025 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:44.025 Nvme0n1 : 5.93 102.46 6.40 0.00 0.00 1131027.98 79449.80 1032444.06 00:07:44.025 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:44.025 Verification LBA range: start 0x0 length 0x4ff8 00:07:44.025 Nvme1n1p1 : 6.11 100.43 6.28 0.00 0.00 1178358.63 60091.47 1180857.90 00:07:44.025 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:44.025 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:44.025 Nvme1n1p1 : 6.01 106.42 6.65 0.00 0.00 1054126.71 75416.81 1064707.94 00:07:44.025 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:44.025 Verification LBA range: start 0x0 length 0x4ff7 00:07:44.025 Nvme1n1p2 : 6.12 100.90 6.31 0.00 0.00 1133967.10 62914.56 1025991.29 00:07:44.025 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:44.025 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:44.025 Nvme1n1p2 : 6.12 108.29 6.77 0.00 0.00 993278.49 77836.60 1096971.82 00:07:44.025 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:44.025 Verification LBA range: start 0x0 length 0x8000 00:07:44.026 Nvme2n1 : 6.12 100.36 6.27 0.00 0.00 1096710.39 62511.26 1025991.29 00:07:44.026 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:44.026 Verification LBA range: start 0x8000 length 0x8000 00:07:44.026 Nvme2n1 : 6.22 119.07 7.44 0.00 0.00 882604.40 35490.26 1122782.92 00:07:44.026 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:44.026 Verification LBA range: start 0x0 length 0x8000 00:07:44.026 Nvme2n2 : 6.12 104.58 6.54 0.00 0.00 1030242.30 123409.33 1025991.29 00:07:44.026 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:44.026 Verification LBA range: start 0x8000 length 0x8000 00:07:44.026 Nvme2n2 : 6.23 85.23 5.33 0.00 0.00 1192817.98 2192.94 2529487.95 00:07:44.026 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:44.026 Verification LBA range: start 0x0 length 0x8000 00:07:44.026 Nvme2n3 : 6.19 113.82 7.11 0.00 0.00 921221.19 31053.98 1032444.06 00:07:44.026 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:44.026 Verification LBA range: start 0x8000 length 0x8000 00:07:44.026 Nvme2n3 : 5.85 98.48 6.16 0.00 0.00 1244118.86 26416.05 1393799.48 00:07:44.026 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:44.026 Verification LBA range: start 0x0 length 0x2000 00:07:44.026 Nvme3n1 : 6.23 119.98 7.50 0.00 0.00 845458.95 3806.13 2000360.37 00:07:44.026 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:44.026 Verification LBA range: start 0x2000 length 0x2000 00:07:44.026 Nvme3n1 : 5.93 102.34 6.40 0.00 0.00 1172282.99 79853.10 1180857.90 00:07:44.026 
[2024-11-26T00:53:06.943Z] =================================================================================================================== 00:07:44.026 [2024-11-26T00:53:06.943Z] Total : 1459.58 91.22 0.00 0.00 1070185.08 2192.94 2529487.95 00:07:44.958 00:07:44.958 real 0m7.799s 00:07:44.958 user 0m14.858s 00:07:44.958 sys 0m0.225s 00:07:44.958 00:53:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.958 00:53:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:44.958 ************************************ 00:07:44.958 END TEST bdev_verify_big_io 00:07:44.958 ************************************ 00:07:44.958 00:53:07 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.958 00:53:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:44.958 00:53:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.958 00:53:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.958 ************************************ 00:07:44.958 START TEST bdev_write_zeroes 00:07:44.958 ************************************ 00:07:44.958 00:53:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:44.958 [2024-11-26 00:53:07.773565] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:44.958 [2024-11-26 00:53:07.773682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75513 ] 00:07:45.216 [2024-11-26 00:53:07.904522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:45.216 [2024-11-26 00:53:07.936532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.216 [2024-11-26 00:53:07.956093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.474 Running I/O for 1 seconds... 
00:07:46.845 68096.00 IOPS, 266.00 MiB/s 00:07:46.845 Latency(us) 00:07:46.845 [2024-11-26T00:53:09.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:46.845 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.845 Nvme0n1 : 1.02 9683.76 37.83 0.00 0.00 13189.98 11191.53 24903.68 00:07:46.845 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.845 Nvme1n1p1 : 1.03 9671.29 37.78 0.00 0.00 13185.87 11141.12 24399.56 00:07:46.845 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.845 Nvme1n1p2 : 1.03 9659.55 37.73 0.00 0.00 13176.85 11191.53 23693.78 00:07:46.845 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.845 Nvme2n1 : 1.03 9648.62 37.69 0.00 0.00 13153.12 11241.94 22887.19 00:07:46.845 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.845 Nvme2n2 : 1.03 9637.26 37.65 0.00 0.00 13121.38 10889.06 22383.06 00:07:46.845 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.845 Nvme2n3 : 1.03 9625.89 37.60 0.00 0.00 13100.92 8267.62 23391.31 00:07:46.845 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:46.845 Nvme3n1 : 1.03 9614.55 37.56 0.00 0.00 13092.83 7965.14 25004.50 00:07:46.845 [2024-11-26T00:53:09.762Z] =================================================================================================================== 00:07:46.845 [2024-11-26T00:53:09.762Z] Total : 67540.93 263.83 0.00 0.00 13145.85 7965.14 25004.50 00:07:46.845 00:07:46.845 real 0m1.841s 00:07:46.845 user 0m1.568s 00:07:46.845 sys 0m0.163s 00:07:46.845 00:53:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.845 ************************************ 00:07:46.845 END TEST bdev_write_zeroes 00:07:46.845 00:53:09 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:46.845 ************************************ 00:07:46.845 00:53:09 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.845 00:53:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:46.845 00:53:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.845 00:53:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.845 ************************************ 00:07:46.845 START TEST bdev_json_nonenclosed 00:07:46.845 ************************************ 00:07:46.845 00:53:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.845 [2024-11-26 00:53:09.657099] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:07:46.845 [2024-11-26 00:53:09.657209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75550 ] 00:07:47.107 [2024-11-26 00:53:09.788576] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.107 [2024-11-26 00:53:09.820073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.107 [2024-11-26 00:53:09.839682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.107 [2024-11-26 00:53:09.839760] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:47.107 [2024-11-26 00:53:09.839779] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:47.107 [2024-11-26 00:53:09.839791] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.107 00:07:47.107 real 0m0.303s 00:07:47.107 user 0m0.127s 00:07:47.107 sys 0m0.073s 00:07:47.107 00:53:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.107 00:53:09 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:47.107 ************************************ 00:07:47.107 END TEST bdev_json_nonenclosed 00:07:47.107 ************************************ 00:07:47.107 00:53:09 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:47.107 00:53:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:47.107 00:53:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.107 00:53:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.107 ************************************ 00:07:47.107 START TEST bdev_json_nonarray 00:07:47.107 ************************************ 00:07:47.107 00:53:09 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:47.107 [2024-11-26 00:53:10.013675] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:47.107 [2024-11-26 00:53:10.013791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75581 ] 00:07:47.369 [2024-11-26 00:53:10.145409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.369 [2024-11-26 00:53:10.172032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.369 [2024-11-26 00:53:10.191956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.369 [2024-11-26 00:53:10.192047] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:07:47.369 [2024-11-26 00:53:10.192068] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:47.369 [2024-11-26 00:53:10.192078] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.369 00:07:47.369 real 0m0.306s 00:07:47.369 user 0m0.111s 00:07:47.369 sys 0m0.092s 00:07:47.369 ************************************ 00:07:47.369 END TEST bdev_json_nonarray 00:07:47.369 ************************************ 00:07:47.369 00:53:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.369 00:53:10 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:47.631 00:53:10 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:47.631 00:53:10 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:47.631 00:53:10 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:47.631 00:53:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.631 00:53:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.631 00:53:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.631 ************************************ 00:07:47.631 START TEST bdev_gpt_uuid 00:07:47.631 ************************************ 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75601 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75601 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 75601 ']' 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:47.631 00:53:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:47.631 [2024-11-26 00:53:10.399115] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:47.631 [2024-11-26 00:53:10.399724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75601 ] 00:07:47.631 [2024-11-26 00:53:10.532435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:07:47.892 [2024-11-26 00:53:10.564072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.892 [2024-11-26 00:53:10.583375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.464 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.464 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:48.464 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:48.464 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.464 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.726 Some configs were skipped because the RPC state that can call them passed over. 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:48.726 { 00:07:48.726 "name": "Nvme1n1p1", 00:07:48.726 "aliases": [ 00:07:48.726 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:48.726 ], 00:07:48.726 "product_name": "GPT Disk", 00:07:48.726 "block_size": 4096, 00:07:48.726 "num_blocks": 655104, 00:07:48.726 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:48.726 "assigned_rate_limits": { 00:07:48.726 "rw_ios_per_sec": 0, 00:07:48.726 "rw_mbytes_per_sec": 0, 00:07:48.726 "r_mbytes_per_sec": 0, 00:07:48.726 "w_mbytes_per_sec": 0 00:07:48.726 }, 00:07:48.726 "claimed": false, 00:07:48.726 "zoned": false, 00:07:48.726 "supported_io_types": { 00:07:48.726 "read": true, 00:07:48.726 "write": true, 00:07:48.726 "unmap": true, 00:07:48.726 "flush": true, 00:07:48.726 "reset": true, 00:07:48.726 "nvme_admin": false, 00:07:48.726 "nvme_io": false, 00:07:48.726 "nvme_io_md": false, 00:07:48.726 "write_zeroes": true, 00:07:48.726 "zcopy": false, 00:07:48.726 "get_zone_info": false, 00:07:48.726 "zone_management": false, 00:07:48.726 "zone_append": false, 00:07:48.726 "compare": true, 00:07:48.726 "compare_and_write": false, 00:07:48.726 "abort": true, 00:07:48.726 "seek_hole": false, 00:07:48.726 "seek_data": false, 00:07:48.726 "copy": true, 00:07:48.726 "nvme_iov_md": false 00:07:48.726 }, 00:07:48.726 "driver_specific": { 00:07:48.726 "gpt": { 00:07:48.726 "base_bdev": "Nvme1n1", 00:07:48.726 "offset_blocks": 256, 00:07:48.726 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:48.726 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:07:48.726 "partition_name": "SPDK_TEST_first" 00:07:48.726 } 00:07:48.726 } 00:07:48.726 } 00:07:48.726 ]' 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:48.726 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:48.988 { 00:07:48.988 "name": "Nvme1n1p2", 00:07:48.988 "aliases": [ 00:07:48.988 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:48.988 ], 00:07:48.988 "product_name": "GPT Disk", 00:07:48.988 "block_size": 4096, 00:07:48.988 "num_blocks": 655103, 00:07:48.988 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.988 "assigned_rate_limits": { 00:07:48.988 "rw_ios_per_sec": 0, 00:07:48.988 "rw_mbytes_per_sec": 0, 00:07:48.988 "r_mbytes_per_sec": 0, 00:07:48.988 "w_mbytes_per_sec": 0 00:07:48.988 }, 00:07:48.988 "claimed": false, 00:07:48.988 "zoned": false, 00:07:48.988 "supported_io_types": { 00:07:48.988 "read": true, 00:07:48.988 "write": true, 00:07:48.988 "unmap": true, 00:07:48.988 "flush": true, 00:07:48.988 "reset": true, 00:07:48.988 "nvme_admin": false, 00:07:48.988 "nvme_io": false, 00:07:48.988 "nvme_io_md": false, 00:07:48.988 "write_zeroes": true, 00:07:48.988 "zcopy": false, 00:07:48.988 "get_zone_info": false, 00:07:48.988 "zone_management": false, 00:07:48.988 "zone_append": false, 00:07:48.988 "compare": true, 00:07:48.988 "compare_and_write": false, 00:07:48.988 "abort": true, 00:07:48.988 "seek_hole": false, 00:07:48.988 "seek_data": false, 00:07:48.988 "copy": true, 00:07:48.988 "nvme_iov_md": false 00:07:48.988 }, 00:07:48.988 "driver_specific": { 00:07:48.988 "gpt": { 00:07:48.988 "base_bdev": "Nvme1n1", 00:07:48.988 "offset_blocks": 655360, 00:07:48.988 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:48.988 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:48.988 "partition_name": "SPDK_TEST_second" 00:07:48.988 } 00:07:48.988 } 00:07:48.988 } 00:07:48.988 ]' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75601 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 75601 ']' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 75601 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75601 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.988 killing process with pid 75601 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75601' 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 75601 00:07:48.988 00:53:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 75601 00:07:49.247 00:07:49.247 real 0m1.743s 00:07:49.247 user 0m1.927s 00:07:49.247 sys 0m0.322s 00:07:49.247 00:53:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.247 00:53:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:49.247 ************************************ 00:07:49.247 END TEST bdev_gpt_uuid 00:07:49.247 ************************************ 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:49.247 00:53:12 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:49.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.819 Waiting for block devices as requested 00:07:49.819 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:49.819 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.081 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:50.081 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:55.389 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:07:55.390 00:53:17 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:55.390 00:53:17 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:55.390 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:55.390 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:55.390 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:55.390 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:55.390 00:53:18 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:55.390 00:07:55.390 real 0m49.005s 00:07:55.390 user 1m1.875s 00:07:55.390 sys 0m7.778s 00:07:55.390 00:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.390 ************************************ 00:07:55.390 END TEST blockdev_nvme_gpt 00:07:55.390 ************************************ 00:07:55.390 00:53:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.652 00:53:18 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:55.652 00:53:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.652 00:53:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.652 00:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:55.652 ************************************ 00:07:55.652 START TEST nvme 00:07:55.652 ************************************ 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:55.652 * Looking for test storage... 00:07:55.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:55.652 00:53:18 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:55.652 00:53:18 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:55.652 00:53:18 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:55.652 00:53:18 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:55.652 00:53:18 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:55.652 00:53:18 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:55.652 00:53:18 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:55.652 00:53:18 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:55.652 00:53:18 nvme -- scripts/common.sh@345 -- # : 1 00:07:55.652 00:53:18 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:55.652 00:53:18 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:55.652 00:53:18 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:55.652 00:53:18 nvme -- scripts/common.sh@353 -- # local d=1 00:07:55.652 00:53:18 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:55.652 00:53:18 nvme -- scripts/common.sh@355 -- # echo 1 00:07:55.652 00:53:18 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:55.652 00:53:18 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@353 -- # local d=2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:55.652 00:53:18 nvme -- scripts/common.sh@355 -- # echo 2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:55.652 00:53:18 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:55.652 00:53:18 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:55.652 00:53:18 nvme -- scripts/common.sh@368 -- # return 0 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.652 --rc genhtml_branch_coverage=1 00:07:55.652 --rc genhtml_function_coverage=1 00:07:55.652 --rc genhtml_legend=1 00:07:55.652 --rc geninfo_all_blocks=1 00:07:55.652 --rc geninfo_unexecuted_blocks=1 00:07:55.652 00:07:55.652 ' 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.652 --rc genhtml_branch_coverage=1 00:07:55.652 --rc genhtml_function_coverage=1 00:07:55.652 --rc genhtml_legend=1 00:07:55.652 --rc geninfo_all_blocks=1 00:07:55.652 --rc geninfo_unexecuted_blocks=1 00:07:55.652 00:07:55.652 ' 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.652 --rc genhtml_branch_coverage=1 00:07:55.652 --rc genhtml_function_coverage=1 00:07:55.652 --rc genhtml_legend=1 00:07:55.652 --rc geninfo_all_blocks=1 00:07:55.652 --rc geninfo_unexecuted_blocks=1 00:07:55.652 00:07:55.652 ' 00:07:55.652 00:53:18 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:55.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:55.652 --rc genhtml_branch_coverage=1 00:07:55.652 --rc genhtml_function_coverage=1 00:07:55.652 --rc genhtml_legend=1 00:07:55.652 --rc geninfo_all_blocks=1 00:07:55.652 --rc geninfo_unexecuted_blocks=1 00:07:55.652 00:07:55.652 ' 00:07:55.652 00:53:18 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:56.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.484 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.744 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.744 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.744 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:56.744 00:53:19 nvme -- nvme/nvme.sh@79 -- # uname 00:07:56.744 00:53:19 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:56.744 00:53:19 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:56.744 00:53:19 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:56.744 00:53:19 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:56.744 00:53:19 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:56.744 00:53:19 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:56.744 Waiting for stub to ready for secondary processes... 00:07:56.744 00:53:19 nvme -- common/autotest_common.sh@1075 -- # stubpid=76226 00:07:56.744 00:53:19 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:56.744 00:53:19 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:56.744 00:53:19 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/76226 ]] 00:07:56.744 00:53:19 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:56.745 00:53:19 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:56.745 [2024-11-26 00:53:19.567586] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:07:56.745 [2024-11-26 00:53:19.567699] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:57.680 [2024-11-26 00:53:20.267917] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.680 [2024-11-26 00:53:20.298208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.680 [2024-11-26 00:53:20.310829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.680 [2024-11-26 00:53:20.310995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.680 [2024-11-26 00:53:20.311037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.680 [2024-11-26 00:53:20.320882] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:57.680 [2024-11-26 00:53:20.320923] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.680 [2024-11-26 00:53:20.332737] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:57.680 [2024-11-26 00:53:20.332947] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:57.680 [2024-11-26 00:53:20.333444] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.680 [2024-11-26 00:53:20.333600] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:57.680 [2024-11-26 00:53:20.333633] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:57.680 [2024-11-26 00:53:20.334119] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.680 [2024-11-26 00:53:20.334227] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:57.680 [2024-11-26 00:53:20.334264] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:57.680 [2024-11-26 00:53:20.335229] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:57.680 [2024-11-26 00:53:20.335597] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:57.680 [2024-11-26 00:53:20.335737] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 
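The stub startup being traced here is the usual SPDK primary-process handshake: autotest launches the stub app in the background, then blocks until the stub creates /var/run/spdk_stub0, which only happens after EAL init and the cuse device registration reported in these notices. A rough sketch of that loop, reconstructed from the traced autotest_common.sh lines and not the verbatim script (treating stub death as a hard failure is an assumption):

    # start the stub primary process in the background and remember its PID
    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!                                  # 76226 in this run
    echo "Waiting for stub to ready for secondary processes..."
    # block until the stub signals readiness by creating its sentinel file
    while [ ! -e /var/run/spdk_stub0 ]; do
        [[ -e /proc/$stubpid ]] || exit 1       # assumed: give up if the stub exited
        sleep 1s
    done
    echo done.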
00:07:57.680 [2024-11-26 00:53:20.335837] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:57.680 [2024-11-26 00:53:20.336035] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:57.680 00:53:20 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:57.680 done. 00:07:57.680 00:53:20 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:57.680 00:53:20 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:57.680 00:53:20 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:57.680 00:53:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.680 00:53:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.680 ************************************ 00:07:57.680 START TEST nvme_reset 00:07:57.680 ************************************ 00:07:57.680 00:53:20 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:57.940 Initializing NVMe Controllers 00:07:57.940 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:57.940 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:57.940 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:57.940 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:57.940 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:57.940 00:07:57.940 real 0m0.197s 00:07:57.940 user 0m0.068s 00:07:57.940 sys 0m0.089s 00:07:57.940 00:53:20 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.940 00:53:20 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:57.940 ************************************ 00:07:57.940 END TEST nvme_reset 00:07:57.940 ************************************ 00:07:57.940 00:53:20 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:57.940 00:53:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.940 00:53:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.940 00:53:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.940 ************************************ 00:07:57.940 START TEST nvme_identify 00:07:57.940 ************************************ 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:57.940 00:53:20 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:57.940 00:53:20 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:57.940 00:53:20 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:57.940 00:53:20 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:57.940 00:53:20 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 
0000:00:12.0 0000:00:13.0 00:07:57.940 00:53:20 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:58.202 ===================================================== 00:07:58.202 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.202 ===================================================== 00:07:58.202 Controller Capabilities/Features 00:07:58.202 ================================ 00:07:58.202 Vendor ID: 1b36 00:07:58.202 Subsystem Vendor ID: 1af4 00:07:58.202 Serial Number: 12340 00:07:58.202 Model Number: QEMU NVMe Ctrl 00:07:58.202 Firmware Version: 8.0.0 00:07:58.202 Recommended Arb Burst: 6 00:07:58.202 IEEE OUI Identifier: 00 54 52 00:07:58.202 Multi-path I/O 00:07:58.202 May have multiple subsystem ports: No 00:07:58.202 May have multiple controllers: No 00:07:58.202 Associated with SR-IOV VF: No 00:07:58.202 Max Data Transfer Size: 524288 00:07:58.202 Max Number of Namespaces: 256 00:07:58.202 Max Number of I/O Queues: 64 00:07:58.202 NVMe Specification Version (VS): 1.4 00:07:58.202 NVMe Specification Version (Identify): 1.4 00:07:58.202 Maximum Queue Entries: 2048 00:07:58.202 Contiguous Queues Required: Yes 00:07:58.202 Arbitration Mechanisms Supported 00:07:58.202 Weighted Round Robin: Not Supported 00:07:58.202 Vendor Specific: Not Supported 00:07:58.202 Reset Timeout: 7500 ms 00:07:58.202 Doorbell Stride: 4 bytes 00:07:58.202 NVM Subsystem Reset: Not Supported 00:07:58.202 Command Sets Supported 00:07:58.202 NVM Command Set: Supported 00:07:58.202 Boot Partition: Not Supported 00:07:58.202 Memory Page Size Minimum: 4096 bytes 00:07:58.202 Memory Page Size Maximum: 65536 bytes 00:07:58.202 Persistent Memory Region: Not Supported 00:07:58.202 Optional Asynchronous Events Supported 00:07:58.202 Namespace Attribute Notices: Supported 00:07:58.202 Firmware Activation Notices: Not Supported 00:07:58.202 ANA Change Notices: Not Supported 00:07:58.202 PLE Aggregate Log Change Notices: Not Supported 00:07:58.202 LBA Status Info Alert Notices: Not Supported 00:07:58.202 EGE Aggregate Log Change Notices: Not Supported 00:07:58.202 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.202 Zone Descriptor Change Notices: Not Supported 00:07:58.202 Discovery Log Change Notices: Not Supported 00:07:58.202 Controller Attributes 00:07:58.202 128-bit Host Identifier: Not Supported 00:07:58.202 Non-Operational Permissive Mode: Not Supported 00:07:58.202 NVM Sets: Not Supported 00:07:58.202 Read Recovery Levels: Not Supported 00:07:58.202 Endurance Groups: Not Supported 00:07:58.202 Predictable Latency Mode: Not Supported 00:07:58.202 Traffic Based Keep ALive: Not Supported 00:07:58.202 Namespace Granularity: Not Supported 00:07:58.202 SQ Associations: Not Supported 00:07:58.202 UUID List: Not Supported 00:07:58.202 Multi-Domain Subsystem: Not Supported 00:07:58.202 Fixed Capacity Management: Not Supported 00:07:58.202 Variable Capacity Management: Not Supported 00:07:58.202 Delete Endurance Group: Not Supported 00:07:58.202 Delete NVM Set: Not Supported 00:07:58.203 Extended LBA Formats Supported: Supported 00:07:58.203 Flexible Data Placement Supported: Not Supported 00:07:58.203 00:07:58.203 Controller Memory Buffer Support 00:07:58.203 ================================ 00:07:58.203 Supported: No 00:07:58.203 00:07:58.203 Persistent Memory Region Support 00:07:58.203 ================================ 00:07:58.203 Supported: No 00:07:58.203 00:07:58.203 Admin Command Set Attributes 00:07:58.203 ============================ 00:07:58.203 
Security Send/Receive: Not Supported 00:07:58.203 Format NVM: Supported 00:07:58.203 Firmware Activate/Download: Not Supported 00:07:58.203 Namespace Management: Supported 00:07:58.203 Device Self-Test: Not Supported 00:07:58.203 Directives: Supported 00:07:58.203 NVMe-MI: Not Supported 00:07:58.203 Virtualization Management: Not Supported 00:07:58.203 Doorbell Buffer Config: Supported 00:07:58.203 Get LBA Status Capability: Not Supported 00:07:58.203 Command & Feature Lockdown Capability: Not Supported 00:07:58.203 Abort Command Limit: 4 00:07:58.203 Async Event Request Limit: 4 00:07:58.203 Number of Firmware Slots: N/A 00:07:58.203 Firmware Slot 1 Read-Only: N/A 00:07:58.203 Firmware Activation Without Reset: N/A 00:07:58.203 Multiple Update Detection Support: N/A 00:07:58.203 Firmware Update Granularity: No Information Provided 00:07:58.203 Per-Namespace SMART Log: Yes 00:07:58.203 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.203 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:58.203 Command Effects Log Page: Supported 00:07:58.203 Get Log Page Extended Data: Supported 00:07:58.203 Telemetry Log Pages: Not Supported 00:07:58.203 Persistent Event Log Pages: Not Supported 00:07:58.203 Supported Log Pages Log Page: May Support 00:07:58.203 Commands Supported & Effects Log Page: Not Supported 00:07:58.203 Feature Identifiers & Effects Log Page:May Support 00:07:58.203 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.203 Data Area 4 for Telemetry Log: Not Supported 00:07:58.203 Error Log Page Entries Supported: 1 00:07:58.203 Keep Alive: Not Supported 00:07:58.203 00:07:58.203 NVM Command Set Attributes 00:07:58.203 ========================== 00:07:58.203 Submission Queue Entry Size 00:07:58.203 Max: 64 00:07:58.203 Min: 64 00:07:58.203 Completion Queue Entry Size 00:07:58.203 Max: 16 00:07:58.203 Min: 16 00:07:58.203 Number of Namespaces: 256 00:07:58.203 Compare Command: Supported 00:07:58.203 Write Uncorrectable Command: Not Supported 00:07:58.203 Dataset Management Command: Supported 00:07:58.203 Write Zeroes Command: Supported 00:07:58.203 Set Features Save Field: Supported 00:07:58.203 Reservations: Not Supported 00:07:58.203 Timestamp: Supported 00:07:58.203 Copy: Supported 00:07:58.203 Volatile Write Cache: Present 00:07:58.203 Atomic Write Unit (Normal): 1 00:07:58.203 Atomic Write Unit (PFail): 1 00:07:58.203 Atomic Compare & Write Unit: 1 00:07:58.203 Fused Compare & Write: Not Supported 00:07:58.203 Scatter-Gather List 00:07:58.203 SGL Command Set: Supported 00:07:58.203 SGL Keyed: Not Supported 00:07:58.203 SGL Bit Bucket Descriptor: Not Supported 00:07:58.203 SGL Metadata Pointer: Not Supported 00:07:58.203 Oversized SGL: Not Supported 00:07:58.203 SGL Metadata Address: Not Supported 00:07:58.203 SGL Offset: Not Supported 00:07:58.203 Transport SGL Data Block: Not Supported 00:07:58.203 Replay Protected Memory Block: Not Supported 00:07:58.203 00:07:58.203 Firmware Slot Information 00:07:58.203 ========================= 00:07:58.203 Active slot: 1 00:07:58.203 Slot 1 Firmware Revision: 1.0 00:07:58.203 00:07:58.203 00:07:58.203 Commands Supported and Effects 00:07:58.203 ============================== 00:07:58.203 Admin Commands 00:07:58.203 -------------- 00:07:58.203 Delete I/O Submission Queue (00h): Supported 00:07:58.203 Create I/O Submission Queue (01h): Supported 00:07:58.203 Get Log Page (02h): Supported 00:07:58.203 Delete I/O Completion Queue (04h): Supported 00:07:58.203 Create I/O Completion Queue (05h): Supported 00:07:58.203 Identify 
(06h): Supported 00:07:58.203 Abort (08h): Supported 00:07:58.203 Set Features (09h): Supported 00:07:58.203 Get Features (0Ah): Supported 00:07:58.203 Asynchronous Event Request (0Ch): Supported 00:07:58.203 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.203 Directive Send (19h): Supported 00:07:58.203 Directive Receive (1Ah): Supported 00:07:58.203 Virtualization Management (1Ch): Supported 00:07:58.203 Doorbell Buffer Config (7Ch): Supported 00:07:58.203 Format NVM (80h): Supported LBA-Change 00:07:58.203 I/O Commands 00:07:58.203 ------------ 00:07:58.203 Flush (00h): Supported LBA-Change 00:07:58.203 Write (01h): Supported LBA-Change 00:07:58.203 Read (02h): Supported 00:07:58.203 Compare (05h): Supported 00:07:58.203 Write Zeroes (08h): Supported LBA-Change 00:07:58.203 Dataset Management (09h): Supported LBA-Change 00:07:58.203 Unknown (0Ch): Supported 00:07:58.203 Unknown (12h): Supported 00:07:58.203 Copy (19h): Supported LBA-Change 00:07:58.203 Unknown (1Dh): Supported LBA-Change 00:07:58.203 00:07:58.203 Error Log 00:07:58.203 ========= 00:07:58.203 00:07:58.203 Arbitration 00:07:58.203 =========== 00:07:58.203 Arbitration Burst: no limit 00:07:58.203 00:07:58.203 Power Management 00:07:58.203 ================ 00:07:58.203 Number of Power States: 1 00:07:58.203 Current Power State: Power State #0 00:07:58.203 Power State #0: 00:07:58.203 Max Power: 25.00 W 00:07:58.203 Non-Operational State: Operational 00:07:58.203 Entry Latency: 16 microseconds 00:07:58.203 Exit Latency: 4 microseconds 00:07:58.203 Relative Read Throughput: 0 00:07:58.203 Relative Read Latency: 0 00:07:58.203 Relative Write Throughput: 0 00:07:58.203 Relative Write Latency: 0 00:07:58.203 Idle Power: Not Reported 00:07:58.203 [2024-11-26 00:53:21.009438] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 76247 terminated unexpected 00:07:58.203 [2024-11-26 00:53:21.010626] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 76247 terminated unexpected 00:07:58.203 Active Power: Not Reported 00:07:58.203 Non-Operational Permissive Mode: Not Supported 00:07:58.203 00:07:58.203 Health Information 00:07:58.203 ================== 00:07:58.203 Critical Warnings: 00:07:58.203 Available Spare Space: OK 00:07:58.203 Temperature: OK 00:07:58.203 Device Reliability: OK 00:07:58.203 Read Only: No 00:07:58.203 Volatile Memory Backup: OK 00:07:58.203 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.203 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.203 Available Spare: 0% 00:07:58.203 Available Spare Threshold: 0% 00:07:58.203 Life Percentage Used: 0% 00:07:58.203 Data Units Read: 671 00:07:58.203 Data Units Written: 599 00:07:58.203 Host Read Commands: 36278 00:07:58.203 Host Write Commands: 36064 00:07:58.203 Controller Busy Time: 0 minutes 00:07:58.203 Power Cycles: 0 00:07:58.203 Power On Hours: 0 hours 00:07:58.203 Unsafe Shutdowns: 0 00:07:58.203 Unrecoverable Media Errors: 0 00:07:58.203 Lifetime Error Log Entries: 0 00:07:58.203 Warning Temperature Time: 0 minutes 00:07:58.203 Critical Temperature Time: 0 minutes 00:07:58.203 00:07:58.203 Number of Queues 00:07:58.203 ================ 00:07:58.203 Number of I/O Submission Queues: 64 00:07:58.203 Number of I/O Completion Queues: 64 00:07:58.203 00:07:58.203 ZNS Specific Controller Data 00:07:58.203 ============================ 00:07:58.203 Zone Append Size Limit: 0 00:07:58.203 00:07:58.203 00:07:58.203 Active Namespaces 00:07:58.203 ================= 
Namespace ID:1 00:07:58.203 Error Recovery Timeout: Unlimited 00:07:58.203 Command Set Identifier: NVM (00h) 00:07:58.203 Deallocate: Supported 00:07:58.203 Deallocated/Unwritten Error: Supported 00:07:58.203 Deallocated Read Value: All 0x00 00:07:58.203 Deallocate in Write Zeroes: Not Supported 00:07:58.203 Deallocated Guard Field: 0xFFFF 00:07:58.203 Flush: Supported 00:07:58.203 Reservation: Not Supported 00:07:58.203 Metadata Transferred as: Separate Metadata Buffer 00:07:58.203 Namespace Sharing Capabilities: Private 00:07:58.203 Size (in LBAs): 1548666 (5GiB) 00:07:58.203 Capacity (in LBAs): 1548666 (5GiB) 00:07:58.203 Utilization (in LBAs): 1548666 (5GiB) 00:07:58.203 Thin Provisioning: Not Supported 00:07:58.203 Per-NS Atomic Units: No 00:07:58.203 Maximum Single Source Range Length: 128 00:07:58.203 Maximum Copy Length: 128 00:07:58.203 Maximum Source Range Count: 128 00:07:58.203 NGUID/EUI64 Never Reused: No 00:07:58.203 Namespace Write Protected: No 00:07:58.203 Number of LBA Formats: 8 00:07:58.203 Current LBA Format: LBA Format #07 00:07:58.203 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.204 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.204 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.204 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.204 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.204 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.204 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.204 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.204 00:07:58.204 NVM Specific Namespace Data 00:07:58.204 =========================== 00:07:58.204 Logical Block Storage Tag Mask: 0 00:07:58.204 Protection Information Capabilities: 00:07:58.204 16b Guard Protection Information Storage Tag Support: No 00:07:58.204 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.204 Storage Tag Check Read Support: No 00:07:58.204 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.204 ===================================================== 00:07:58.204 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.204 ===================================================== 00:07:58.204 Controller Capabilities/Features 00:07:58.204 ================================ 00:07:58.204 Vendor ID: 1b36 00:07:58.204 Subsystem Vendor ID: 1af4 00:07:58.204 Serial Number: 12341 00:07:58.204 Model Number: QEMU NVMe Ctrl 00:07:58.204 Firmware Version: 8.0.0 00:07:58.204 Recommended Arb Burst: 6 00:07:58.204 IEEE OUI Identifier: 00 54 52 00:07:58.204 Multi-path I/O 00:07:58.204 May have multiple subsystem ports: No 00:07:58.204 May have multiple controllers: No 00:07:58.204 Associated with SR-IOV VF: No 00:07:58.204 Max Data Transfer Size: 524288 
00:07:58.204 Max Number of Namespaces: 256 00:07:58.204 Max Number of I/O Queues: 64 00:07:58.204 NVMe Specification Version (VS): 1.4 00:07:58.204 NVMe Specification Version (Identify): 1.4 00:07:58.204 Maximum Queue Entries: 2048 00:07:58.204 Contiguous Queues Required: Yes 00:07:58.204 Arbitration Mechanisms Supported 00:07:58.204 Weighted Round Robin: Not Supported 00:07:58.204 Vendor Specific: Not Supported 00:07:58.204 Reset Timeout: 7500 ms 00:07:58.204 Doorbell Stride: 4 bytes 00:07:58.204 NVM Subsystem Reset: Not Supported 00:07:58.204 Command Sets Supported 00:07:58.204 NVM Command Set: Supported 00:07:58.204 Boot Partition: Not Supported 00:07:58.204 Memory Page Size Minimum: 4096 bytes 00:07:58.204 Memory Page Size Maximum: 65536 bytes 00:07:58.204 Persistent Memory Region: Not Supported 00:07:58.204 Optional Asynchronous Events Supported 00:07:58.204 Namespace Attribute Notices: Supported 00:07:58.204 Firmware Activation Notices: Not Supported 00:07:58.204 ANA Change Notices: Not Supported 00:07:58.204 PLE Aggregate Log Change Notices: Not Supported 00:07:58.204 LBA Status Info Alert Notices: Not Supported 00:07:58.204 EGE Aggregate Log Change Notices: Not Supported 00:07:58.204 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.204 Zone Descriptor Change Notices: Not Supported 00:07:58.204 Discovery Log Change Notices: Not Supported 00:07:58.204 Controller Attributes 00:07:58.204 128-bit Host Identifier: Not Supported 00:07:58.204 Non-Operational Permissive Mode: Not Supported 00:07:58.204 NVM Sets: Not Supported 00:07:58.204 Read Recovery Levels: Not Supported 00:07:58.204 Endurance Groups: Not Supported 00:07:58.204 Predictable Latency Mode: Not Supported 00:07:58.204 Traffic Based Keep ALive: Not Supported 00:07:58.204 Namespace Granularity: Not Supported 00:07:58.204 SQ Associations: Not Supported 00:07:58.204 UUID List: Not Supported 00:07:58.204 Multi-Domain Subsystem: Not Supported 00:07:58.204 Fixed Capacity Management: Not Supported 00:07:58.204 Variable Capacity Management: Not Supported 00:07:58.204 Delete Endurance Group: Not Supported 00:07:58.204 Delete NVM Set: Not Supported 00:07:58.204 Extended LBA Formats Supported: Supported 00:07:58.204 Flexible Data Placement Supported: Not Supported 00:07:58.204 00:07:58.204 Controller Memory Buffer Support 00:07:58.204 ================================ 00:07:58.204 Supported: No 00:07:58.204 00:07:58.204 Persistent Memory Region Support 00:07:58.204 ================================ 00:07:58.204 Supported: No 00:07:58.204 00:07:58.204 Admin Command Set Attributes 00:07:58.204 ============================ 00:07:58.204 Security Send/Receive: Not Supported 00:07:58.204 Format NVM: Supported 00:07:58.204 Firmware Activate/Download: Not Supported 00:07:58.204 Namespace Management: Supported 00:07:58.204 Device Self-Test: Not Supported 00:07:58.204 Directives: Supported 00:07:58.204 NVMe-MI: Not Supported 00:07:58.204 Virtualization Management: Not Supported 00:07:58.204 Doorbell Buffer Config: Supported 00:07:58.204 Get LBA Status Capability: Not Supported 00:07:58.204 Command & Feature Lockdown Capability: Not Supported 00:07:58.204 Abort Command Limit: 4 00:07:58.204 Async Event Request Limit: 4 00:07:58.204 Number of Firmware Slots: N/A 00:07:58.204 Firmware Slot 1 Read-Only: N/A 00:07:58.204 Firmware Activation Without Reset: N/A 00:07:58.204 Multiple Update Detection Support: N/A 00:07:58.204 Firmware Update Granularity: No Information Provided 00:07:58.204 Per-Namespace SMART Log: Yes 00:07:58.204 Asymmetric 
Namespace Access Log Page: Not Supported 00:07:58.204 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:58.204 Command Effects Log Page: Supported 00:07:58.204 Get Log Page Extended Data: Supported 00:07:58.204 Telemetry Log Pages: Not Supported 00:07:58.204 Persistent Event Log Pages: Not Supported 00:07:58.204 Supported Log Pages Log Page: May Support 00:07:58.204 Commands Supported & Effects Log Page: Not Supported 00:07:58.204 Feature Identifiers & Effects Log Page:May Support 00:07:58.204 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.204 Data Area 4 for Telemetry Log: Not Supported 00:07:58.204 Error Log Page Entries Supported: 1 00:07:58.204 Keep Alive: Not Supported 00:07:58.204 00:07:58.204 NVM Command Set Attributes 00:07:58.204 ========================== 00:07:58.204 Submission Queue Entry Size 00:07:58.204 Max: 64 00:07:58.204 Min: 64 00:07:58.204 Completion Queue Entry Size 00:07:58.204 Max: 16 00:07:58.204 Min: 16 00:07:58.204 Number of Namespaces: 256 00:07:58.204 Compare Command: Supported 00:07:58.204 Write Uncorrectable Command: Not Supported 00:07:58.204 Dataset Management Command: Supported 00:07:58.204 Write Zeroes Command: Supported 00:07:58.204 Set Features Save Field: Supported 00:07:58.204 Reservations: Not Supported 00:07:58.204 Timestamp: Supported 00:07:58.204 Copy: Supported 00:07:58.204 Volatile Write Cache: Present 00:07:58.204 Atomic Write Unit (Normal): 1 00:07:58.204 Atomic Write Unit (PFail): 1 00:07:58.204 Atomic Compare & Write Unit: 1 00:07:58.204 Fused Compare & Write: Not Supported 00:07:58.204 Scatter-Gather List 00:07:58.204 SGL Command Set: Supported 00:07:58.204 SGL Keyed: Not Supported 00:07:58.204 SGL Bit Bucket Descriptor: Not Supported 00:07:58.204 SGL Metadata Pointer: Not Supported 00:07:58.204 Oversized SGL: Not Supported 00:07:58.204 SGL Metadata Address: Not Supported 00:07:58.204 SGL Offset: Not Supported 00:07:58.204 Transport SGL Data Block: Not Supported 00:07:58.204 Replay Protected Memory Block: Not Supported 00:07:58.204 00:07:58.204 Firmware Slot Information 00:07:58.204 ========================= 00:07:58.204 Active slot: 1 00:07:58.204 Slot 1 Firmware Revision: 1.0 00:07:58.204 00:07:58.204 00:07:58.204 Commands Supported and Effects 00:07:58.204 ============================== 00:07:58.204 Admin Commands 00:07:58.204 -------------- 00:07:58.204 Delete I/O Submission Queue (00h): Supported 00:07:58.204 Create I/O Submission Queue (01h): Supported 00:07:58.204 Get Log Page (02h): Supported 00:07:58.204 Delete I/O Completion Queue (04h): Supported 00:07:58.204 Create I/O Completion Queue (05h): Supported 00:07:58.204 Identify (06h): Supported 00:07:58.204 Abort (08h): Supported 00:07:58.204 Set Features (09h): Supported 00:07:58.204 Get Features (0Ah): Supported 00:07:58.204 Asynchronous Event Request (0Ch): Supported 00:07:58.204 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.204 Directive Send (19h): Supported 00:07:58.204 Directive Receive (1Ah): Supported 00:07:58.204 Virtualization Management (1Ch): Supported 00:07:58.204 Doorbell Buffer Config (7Ch): Supported 00:07:58.204 Format NVM (80h): Supported LBA-Change 00:07:58.204 I/O Commands 00:07:58.205 ------------ 00:07:58.205 Flush (00h): Supported LBA-Change 00:07:58.205 Write (01h): Supported LBA-Change 00:07:58.205 Read (02h): Supported 00:07:58.205 Compare (05h): Supported 00:07:58.205 Write Zeroes (08h): Supported LBA-Change 00:07:58.205 Dataset Management (09h): Supported LBA-Change 00:07:58.205 Unknown (0Ch): Supported 00:07:58.205 Unknown 
(12h): Supported 00:07:58.205 Copy (19h): Supported LBA-Change 00:07:58.205 Unknown (1Dh): Supported LBA-Change 00:07:58.205 00:07:58.205 Error Log 00:07:58.205 ========= 00:07:58.205 00:07:58.205 Arbitration 00:07:58.205 =========== 00:07:58.205 Arbitration Burst: no limit 00:07:58.205 00:07:58.205 Power Management 00:07:58.205 ================ 00:07:58.205 Number of Power States: 1 00:07:58.205 Current Power State: Power State #0 00:07:58.205 Power State #0: 00:07:58.205 Max Power: 25.00 W 00:07:58.205 Non-Operational State: Operational 00:07:58.205 Entry Latency: 16 microseconds 00:07:58.205 Exit Latency: 4 microseconds 00:07:58.205 Relative Read Throughput: 0 00:07:58.205 Relative Read Latency: 0 00:07:58.205 Relative Write Throughput: 0 00:07:58.205 Relative Write Latency: 0 00:07:58.205 Idle Power: Not Reported 00:07:58.205 Active Power: Not Reported 00:07:58.205 Non-Operational Permissive Mode: Not Supported 00:07:58.205 00:07:58.205 Health Information 00:07:58.205 ================== 00:07:58.205 Critical Warnings: 00:07:58.205 Available Spare Space: OK 00:07:58.205 Temperature: OK 00:07:58.205 Device Reliability: OK 00:07:58.205 Read Only: No 00:07:58.205 Volatile Memory Backup: OK 00:07:58.205 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.205 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.205 Available Spare: 0% 00:07:58.205 Available Spare Threshold: 0% 00:07:58.205 Life Percentage Used: 0% 00:07:58.205 Data Units Read: 1032 00:07:58.205 Data Units Written: 893 00:07:58.205 Host Read Commands: 53139 00:07:58.205 Host Write Commands: 51816 00:07:58.205 Controller Busy Time: 0 minutes 00:07:58.205 Power Cycles: 0 00:07:58.205 Power On Hours: 0 hours 00:07:58.205 Unsafe Shutdowns: 0 00:07:58.205 Unrecoverable Media Errors: 0 00:07:58.205 Lifetime Error Log Entries: 0 00:07:58.205 Warning Temperature Time: 0 minutes 00:07:58.205 Critical Temperature Time: 0 minutes 00:07:58.205 00:07:58.205 Number of Queues 00:07:58.205 ================ 00:07:58.205 Number of I/O Submission Queues: 64 00:07:58.205 Number of I/O Completion Queues: 64 00:07:58.205 00:07:58.205 ZNS Specific Controller Data 00:07:58.205 ============================ 00:07:58.205 Zone Append Size Limit: 0 00:07:58.205 00:07:58.205 00:07:58.205 Active Namespaces 00:07:58.205 ================= 00:07:58.205 Namespace ID:1 00:07:58.205 Error Recovery Timeout: Unlimited 00:07:58.205 Command Set Identifier: NVM (00h) 00:07:58.205 Deallocate: Supported 00:07:58.205 Deallocated/Unwritten Error: Supported 00:07:58.205 Deallocated Read Value: All 0x00 00:07:58.205 Deallocate in Write Zeroes: Not Supported 00:07:58.205 Deallocated Guard Field: 0xFFFF 00:07:58.205 Flush: Supported 00:07:58.205 Reservation: Not Supported 00:07:58.205 Namespace Sharing Capabilities: Private 00:07:58.205 Size (in LBAs): 1310720 (5GiB) 00:07:58.205 Capacity (in LBAs): 1310720 (5GiB) 00:07:58.205 Utilization (in LBAs): 1310720 (5GiB) 00:07:58.205 Thin Provisioning: Not Supported 00:07:58.205 Per-NS Atomic Units: No 00:07:58.205 Maximum Single Source Range Length: 128 00:07:58.205 Maximum Copy Length: 128 00:07:58.205 Maximum Source Range Count: 128 00:07:58.205 NGUID/EUI64 Never Reused: No 00:07:58.205 Namespace Write Protected: No 00:07:58.205 Number of LBA Formats: 8 00:07:58.205 Current LBA Format: LBA Format #04 00:07:58.205 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.205 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.205 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.205 LBA Format #03: Data Size: 512 
Metadata Size: 64 00:07:58.205 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.205 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.205 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.205 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.205 00:07:58.205 NVM Specific Namespace Data 00:07:58.205 =========================== 00:07:58.205 Logical Block Storage Tag Mask: 0 00:07:58.205 Protection Information Capabilities: 00:07:58.205 16b Guard Protection Information Storage Tag Support: No 00:07:58.205 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.205 Storage Tag Check Read Support: No 00:07:58.205 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.205 ===================================================== 00:07:58.205 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.205 ===================================================== 00:07:58.205 Controller Capabilities/Features 00:07:58.205 ================================ 00:07:58.205 Vendor ID: 1b36 00:07:58.205 Subsystem Vendor ID: 1af4 00:07:58.205 Serial Number: 12343 00:07:58.205 Model Number: QEMU NVMe Ctrl 00:07:58.205 Firmware Version: 8.0.0 00:07:58.205 Recommended Arb Burst: 6 00:07:58.205 IEEE OUI Identifier: 00 54 52 00:07:58.205 [2024-11-26 00:53:21.012484] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 76247 terminated unexpected 00:07:58.205 Multi-path I/O 00:07:58.205 May have multiple subsystem ports: No 00:07:58.205 May have multiple controllers: Yes 00:07:58.205 Associated with SR-IOV VF: No 00:07:58.205 Max Data Transfer Size: 524288 00:07:58.205 Max Number of Namespaces: 256 00:07:58.205 Max Number of I/O Queues: 64 00:07:58.205 NVMe Specification Version (VS): 1.4 00:07:58.205 NVMe Specification Version (Identify): 1.4 00:07:58.205 Maximum Queue Entries: 2048 00:07:58.205 Contiguous Queues Required: Yes 00:07:58.205 Arbitration Mechanisms Supported 00:07:58.205 Weighted Round Robin: Not Supported 00:07:58.205 Vendor Specific: Not Supported 00:07:58.205 Reset Timeout: 7500 ms 00:07:58.205 Doorbell Stride: 4 bytes 00:07:58.205 NVM Subsystem Reset: Not Supported 00:07:58.205 Command Sets Supported 00:07:58.205 NVM Command Set: Supported 00:07:58.205 Boot Partition: Not Supported 00:07:58.205 Memory Page Size Minimum: 4096 bytes 00:07:58.205 Memory Page Size Maximum: 65536 bytes 00:07:58.205 Persistent Memory Region: Not Supported 00:07:58.205 Optional Asynchronous Events Supported 00:07:58.205 Namespace Attribute Notices: Supported 00:07:58.205 Firmware Activation Notices: Not Supported 00:07:58.205 ANA Change Notices: Not Supported 00:07:58.205 PLE Aggregate Log Change Notices: Not Supported 00:07:58.205 LBA Status Info Alert Notices: Not Supported 
00:07:58.205 EGE Aggregate Log Change Notices: Not Supported 00:07:58.205 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.205 Zone Descriptor Change Notices: Not Supported 00:07:58.205 Discovery Log Change Notices: Not Supported 00:07:58.205 Controller Attributes 00:07:58.205 128-bit Host Identifier: Not Supported 00:07:58.205 Non-Operational Permissive Mode: Not Supported 00:07:58.205 NVM Sets: Not Supported 00:07:58.205 Read Recovery Levels: Not Supported 00:07:58.205 Endurance Groups: Supported 00:07:58.205 Predictable Latency Mode: Not Supported 00:07:58.205 Traffic Based Keep ALive: Not Supported 00:07:58.205 Namespace Granularity: Not Supported 00:07:58.205 SQ Associations: Not Supported 00:07:58.205 UUID List: Not Supported 00:07:58.205 Multi-Domain Subsystem: Not Supported 00:07:58.205 Fixed Capacity Management: Not Supported 00:07:58.205 Variable Capacity Management: Not Supported 00:07:58.205 Delete Endurance Group: Not Supported 00:07:58.205 Delete NVM Set: Not Supported 00:07:58.205 Extended LBA Formats Supported: Supported 00:07:58.205 Flexible Data Placement Supported: Supported 00:07:58.205 00:07:58.205 Controller Memory Buffer Support 00:07:58.205 ================================ 00:07:58.205 Supported: No 00:07:58.205 00:07:58.205 Persistent Memory Region Support 00:07:58.205 ================================ 00:07:58.205 Supported: No 00:07:58.205 00:07:58.205 Admin Command Set Attributes 00:07:58.205 ============================ 00:07:58.205 Security Send/Receive: Not Supported 00:07:58.205 Format NVM: Supported 00:07:58.206 Firmware Activate/Download: Not Supported 00:07:58.206 Namespace Management: Supported 00:07:58.206 Device Self-Test: Not Supported 00:07:58.206 Directives: Supported 00:07:58.206 NVMe-MI: Not Supported 00:07:58.206 Virtualization Management: Not Supported 00:07:58.206 Doorbell Buffer Config: Supported 00:07:58.206 Get LBA Status Capability: Not Supported 00:07:58.206 Command & Feature Lockdown Capability: Not Supported 00:07:58.206 Abort Command Limit: 4 00:07:58.206 Async Event Request Limit: 4 00:07:58.206 Number of Firmware Slots: N/A 00:07:58.206 Firmware Slot 1 Read-Only: N/A 00:07:58.206 Firmware Activation Without Reset: N/A 00:07:58.206 Multiple Update Detection Support: N/A 00:07:58.206 Firmware Update Granularity: No Information Provided 00:07:58.206 Per-Namespace SMART Log: Yes 00:07:58.206 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.206 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:58.206 Command Effects Log Page: Supported 00:07:58.206 Get Log Page Extended Data: Supported 00:07:58.206 Telemetry Log Pages: Not Supported 00:07:58.206 Persistent Event Log Pages: Not Supported 00:07:58.206 Supported Log Pages Log Page: May Support 00:07:58.206 Commands Supported & Effects Log Page: Not Supported 00:07:58.206 Feature Identifiers & Effects Log Page:May Support 00:07:58.206 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.206 Data Area 4 for Telemetry Log: Not Supported 00:07:58.206 Error Log Page Entries Supported: 1 00:07:58.206 Keep Alive: Not Supported 00:07:58.206 00:07:58.206 NVM Command Set Attributes 00:07:58.206 ========================== 00:07:58.206 Submission Queue Entry Size 00:07:58.206 Max: 64 00:07:58.206 Min: 64 00:07:58.206 Completion Queue Entry Size 00:07:58.206 Max: 16 00:07:58.206 Min: 16 00:07:58.206 Number of Namespaces: 256 00:07:58.206 Compare Command: Supported 00:07:58.206 Write Uncorrectable Command: Not Supported 00:07:58.206 Dataset Management Command: Supported 
00:07:58.206 Write Zeroes Command: Supported 00:07:58.206 Set Features Save Field: Supported 00:07:58.206 Reservations: Not Supported 00:07:58.206 Timestamp: Supported 00:07:58.206 Copy: Supported 00:07:58.206 Volatile Write Cache: Present 00:07:58.206 Atomic Write Unit (Normal): 1 00:07:58.206 Atomic Write Unit (PFail): 1 00:07:58.206 Atomic Compare & Write Unit: 1 00:07:58.206 Fused Compare & Write: Not Supported 00:07:58.206 Scatter-Gather List 00:07:58.206 SGL Command Set: Supported 00:07:58.206 SGL Keyed: Not Supported 00:07:58.206 SGL Bit Bucket Descriptor: Not Supported 00:07:58.206 SGL Metadata Pointer: Not Supported 00:07:58.206 Oversized SGL: Not Supported 00:07:58.206 SGL Metadata Address: Not Supported 00:07:58.206 SGL Offset: Not Supported 00:07:58.206 Transport SGL Data Block: Not Supported 00:07:58.206 Replay Protected Memory Block: Not Supported 00:07:58.206 00:07:58.206 Firmware Slot Information 00:07:58.206 ========================= 00:07:58.206 Active slot: 1 00:07:58.206 Slot 1 Firmware Revision: 1.0 00:07:58.206 00:07:58.206 00:07:58.206 Commands Supported and Effects 00:07:58.206 ============================== 00:07:58.206 Admin Commands 00:07:58.206 -------------- 00:07:58.206 Delete I/O Submission Queue (00h): Supported 00:07:58.206 Create I/O Submission Queue (01h): Supported 00:07:58.206 Get Log Page (02h): Supported 00:07:58.206 Delete I/O Completion Queue (04h): Supported 00:07:58.206 Create I/O Completion Queue (05h): Supported 00:07:58.206 Identify (06h): Supported 00:07:58.206 Abort (08h): Supported 00:07:58.206 Set Features (09h): Supported 00:07:58.206 Get Features (0Ah): Supported 00:07:58.206 Asynchronous Event Request (0Ch): Supported 00:07:58.206 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.206 Directive Send (19h): Supported 00:07:58.206 Directive Receive (1Ah): Supported 00:07:58.206 Virtualization Management (1Ch): Supported 00:07:58.206 Doorbell Buffer Config (7Ch): Supported 00:07:58.206 Format NVM (80h): Supported LBA-Change 00:07:58.206 I/O Commands 00:07:58.206 ------------ 00:07:58.206 Flush (00h): Supported LBA-Change 00:07:58.206 Write (01h): Supported LBA-Change 00:07:58.206 Read (02h): Supported 00:07:58.206 Compare (05h): Supported 00:07:58.206 Write Zeroes (08h): Supported LBA-Change 00:07:58.206 Dataset Management (09h): Supported LBA-Change 00:07:58.206 Unknown (0Ch): Supported 00:07:58.206 Unknown (12h): Supported 00:07:58.206 Copy (19h): Supported LBA-Change 00:07:58.206 Unknown (1Dh): Supported LBA-Change 00:07:58.206 00:07:58.206 Error Log 00:07:58.206 ========= 00:07:58.206 00:07:58.206 Arbitration 00:07:58.206 =========== 00:07:58.206 Arbitration Burst: no limit 00:07:58.206 00:07:58.206 Power Management 00:07:58.206 ================ 00:07:58.206 Number of Power States: 1 00:07:58.206 Current Power State: Power State #0 00:07:58.206 Power State #0: 00:07:58.206 Max Power: 25.00 W 00:07:58.206 Non-Operational State: Operational 00:07:58.206 Entry Latency: 16 microseconds 00:07:58.206 Exit Latency: 4 microseconds 00:07:58.206 Relative Read Throughput: 0 00:07:58.206 Relative Read Latency: 0 00:07:58.206 Relative Write Throughput: 0 00:07:58.206 Relative Write Latency: 0 00:07:58.206 Idle Power: Not Reported 00:07:58.206 Active Power: Not Reported 00:07:58.206 Non-Operational Permissive Mode: Not Supported 00:07:58.206 00:07:58.206 Health Information 00:07:58.206 ================== 00:07:58.206 Critical Warnings: 00:07:58.206 Available Spare Space: OK 00:07:58.206 Temperature: OK 00:07:58.206 Device Reliability: 
OK 00:07:58.206 Read Only: No 00:07:58.206 Volatile Memory Backup: OK 00:07:58.206 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.206 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.206 Available Spare: 0% 00:07:58.206 Available Spare Threshold: 0% 00:07:58.206 Life Percentage Used: 0% 00:07:58.206 Data Units Read: 757 00:07:58.206 Data Units Written: 686 00:07:58.206 Host Read Commands: 37273 00:07:58.206 Host Write Commands: 36696 00:07:58.206 Controller Busy Time: 0 minutes 00:07:58.206 Power Cycles: 0 00:07:58.206 Power On Hours: 0 hours 00:07:58.206 Unsafe Shutdowns: 0 00:07:58.206 Unrecoverable Media Errors: 0 00:07:58.206 Lifetime Error Log Entries: 0 00:07:58.206 Warning Temperature Time: 0 minutes 00:07:58.206 Critical Temperature Time: 0 minutes 00:07:58.206 00:07:58.206 Number of Queues 00:07:58.206 ================ 00:07:58.206 Number of I/O Submission Queues: 64 00:07:58.206 Number of I/O Completion Queues: 64 00:07:58.206 00:07:58.206 ZNS Specific Controller Data 00:07:58.206 ============================ 00:07:58.206 Zone Append Size Limit: 0 00:07:58.206 00:07:58.206 00:07:58.206 Active Namespaces 00:07:58.206 ================= 00:07:58.206 Namespace ID:1 00:07:58.206 Error Recovery Timeout: Unlimited 00:07:58.206 Command Set Identifier: NVM (00h) 00:07:58.206 Deallocate: Supported 00:07:58.206 Deallocated/Unwritten Error: Supported 00:07:58.206 Deallocated Read Value: All 0x00 00:07:58.206 Deallocate in Write Zeroes: Not Supported 00:07:58.206 Deallocated Guard Field: 0xFFFF 00:07:58.206 Flush: Supported 00:07:58.206 Reservation: Not Supported 00:07:58.206 Namespace Sharing Capabilities: Multiple Controllers 00:07:58.206 Size (in LBAs): 262144 (1GiB) 00:07:58.206 Capacity (in LBAs): 262144 (1GiB) 00:07:58.206 Utilization (in LBAs): 262144 (1GiB) 00:07:58.206 Thin Provisioning: Not Supported 00:07:58.206 Per-NS Atomic Units: No 00:07:58.206 Maximum Single Source Range Length: 128 00:07:58.206 Maximum Copy Length: 128 00:07:58.206 Maximum Source Range Count: 128 00:07:58.206 NGUID/EUI64 Never Reused: No 00:07:58.206 Namespace Write Protected: No 00:07:58.206 Endurance group ID: 1 00:07:58.206 Number of LBA Formats: 8 00:07:58.206 Current LBA Format: LBA Format #04 00:07:58.206 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.206 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.206 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.206 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.206 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.206 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.206 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.206 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.206 00:07:58.206 Get Feature FDP: 00:07:58.206 ================ 00:07:58.206 Enabled: Yes 00:07:58.206 FDP configuration index: 0 00:07:58.206 00:07:58.206 FDP configurations log page 00:07:58.206 =========================== 00:07:58.206 Number of FDP configurations: 1 00:07:58.206 Version: 0 00:07:58.206 Size: 112 00:07:58.206 FDP Configuration Descriptor: 0 00:07:58.207 Descriptor Size: 96 00:07:58.207 Reclaim Group Identifier format: 2 00:07:58.207 FDP Volatile Write Cache: Not Present 00:07:58.207 FDP Configuration: Valid 00:07:58.207 Vendor Specific Size: 0 00:07:58.207 Number of Reclaim Groups: 2 00:07:58.207 Number of Reclaim Unit Handles: 8 00:07:58.207 Max Placement Identifiers: 128 00:07:58.207 Number of Namespaces Supported: 256 00:07:58.207 Reclaim Unit Nominal Size: 6000000 bytes 00:07:58.207 
Estimated Reclaim Unit Time Limit: Not Reported 00:07:58.207 RUH Desc #000: RUH Type: Initially Isolated 00:07:58.207 RUH Desc #001: RUH Type: Initially Isolated 00:07:58.207 RUH Desc #002: RUH Type: Initially Isolated 00:07:58.207 RUH Desc #003: RUH Type: Initially Isolated 00:07:58.207 RUH Desc #004: RUH Type: Initially Isolated 00:07:58.207 RUH Desc #005: RUH Type: Initially Isolated 00:07:58.207 RUH Desc #006: RUH Type: Initially Isolated 00:07:58.207 RUH Desc #007: RUH Type: Initially Isolated 00:07:58.207 00:07:58.207 FDP reclaim unit handle usage log page 00:07:58.207 ====================================== 00:07:58.207 Number of Reclaim Unit Handles: 8 00:07:58.207 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:58.207 RUH Usage Desc #001: RUH Attributes: Unused 00:07:58.207 RUH Usage Desc #002: RUH Attributes: Unused 00:07:58.207 RUH Usage Desc #003: RUH Attributes: Unused 00:07:58.207 RUH Usage Desc #004: RUH Attributes: Unused 00:07:58.207 RUH Usage Desc #005: RUH Attributes: Unused 00:07:58.207 RUH Usage Desc #006: RUH Attributes: Unused 00:07:58.207 RUH Usage Desc #007: RUH Attributes: Unused 00:07:58.207 00:07:58.207 FDP statistics log page 00:07:58.207 ======================= 00:07:58.207 Host bytes with metadata written: 431955968 00:07:58.207 [2024-11-26 00:53:21.013600] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 76247 terminated unexpected 00:07:58.207 Media bytes with metadata written: 431988736 00:07:58.207 Media bytes erased: 0 00:07:58.207 00:07:58.207 FDP events log page 00:07:58.207 =================== 00:07:58.207 Number of FDP events: 0 00:07:58.207 00:07:58.207 NVM Specific Namespace Data 00:07:58.207 =========================== 00:07:58.207 Logical Block Storage Tag Mask: 0 00:07:58.207 Protection Information Capabilities: 00:07:58.207 16b Guard Protection Information Storage Tag Support: No 00:07:58.207 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.207 Storage Tag Check Read Support: No 00:07:58.207 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.207 ===================================================== 00:07:58.207 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.207 ===================================================== 00:07:58.207 Controller Capabilities/Features 00:07:58.207 ================================ 00:07:58.207 Vendor ID: 1b36 00:07:58.207 Subsystem Vendor ID: 1af4 00:07:58.207 Serial Number: 12342 00:07:58.207 Model Number: QEMU NVMe Ctrl 00:07:58.207 Firmware Version: 8.0.0 00:07:58.207 Recommended Arb Burst: 6 00:07:58.207 IEEE OUI Identifier: 00 54 52 00:07:58.207 Multi-path I/O 00:07:58.207 May have multiple subsystem ports: No 00:07:58.207 May have multiple controllers: No 
00:07:58.207 Associated with SR-IOV VF: No 00:07:58.207 Max Data Transfer Size: 524288 00:07:58.207 Max Number of Namespaces: 256 00:07:58.207 Max Number of I/O Queues: 64 00:07:58.207 NVMe Specification Version (VS): 1.4 00:07:58.207 NVMe Specification Version (Identify): 1.4 00:07:58.207 Maximum Queue Entries: 2048 00:07:58.207 Contiguous Queues Required: Yes 00:07:58.207 Arbitration Mechanisms Supported 00:07:58.207 Weighted Round Robin: Not Supported 00:07:58.207 Vendor Specific: Not Supported 00:07:58.207 Reset Timeout: 7500 ms 00:07:58.207 Doorbell Stride: 4 bytes 00:07:58.207 NVM Subsystem Reset: Not Supported 00:07:58.207 Command Sets Supported 00:07:58.207 NVM Command Set: Supported 00:07:58.207 Boot Partition: Not Supported 00:07:58.207 Memory Page Size Minimum: 4096 bytes 00:07:58.207 Memory Page Size Maximum: 65536 bytes 00:07:58.207 Persistent Memory Region: Not Supported 00:07:58.207 Optional Asynchronous Events Supported 00:07:58.207 Namespace Attribute Notices: Supported 00:07:58.207 Firmware Activation Notices: Not Supported 00:07:58.207 ANA Change Notices: Not Supported 00:07:58.207 PLE Aggregate Log Change Notices: Not Supported 00:07:58.207 LBA Status Info Alert Notices: Not Supported 00:07:58.207 EGE Aggregate Log Change Notices: Not Supported 00:07:58.207 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.207 Zone Descriptor Change Notices: Not Supported 00:07:58.207 Discovery Log Change Notices: Not Supported 00:07:58.207 Controller Attributes 00:07:58.207 128-bit Host Identifier: Not Supported 00:07:58.207 Non-Operational Permissive Mode: Not Supported 00:07:58.207 NVM Sets: Not Supported 00:07:58.207 Read Recovery Levels: Not Supported 00:07:58.207 Endurance Groups: Not Supported 00:07:58.207 Predictable Latency Mode: Not Supported 00:07:58.207 Traffic Based Keep ALive: Not Supported 00:07:58.207 Namespace Granularity: Not Supported 00:07:58.207 SQ Associations: Not Supported 00:07:58.207 UUID List: Not Supported 00:07:58.207 Multi-Domain Subsystem: Not Supported 00:07:58.207 Fixed Capacity Management: Not Supported 00:07:58.207 Variable Capacity Management: Not Supported 00:07:58.207 Delete Endurance Group: Not Supported 00:07:58.207 Delete NVM Set: Not Supported 00:07:58.207 Extended LBA Formats Supported: Supported 00:07:58.207 Flexible Data Placement Supported: Not Supported 00:07:58.207 00:07:58.207 Controller Memory Buffer Support 00:07:58.207 ================================ 00:07:58.207 Supported: No 00:07:58.207 00:07:58.207 Persistent Memory Region Support 00:07:58.207 ================================ 00:07:58.207 Supported: No 00:07:58.207 00:07:58.207 Admin Command Set Attributes 00:07:58.207 ============================ 00:07:58.207 Security Send/Receive: Not Supported 00:07:58.207 Format NVM: Supported 00:07:58.207 Firmware Activate/Download: Not Supported 00:07:58.207 Namespace Management: Supported 00:07:58.207 Device Self-Test: Not Supported 00:07:58.207 Directives: Supported 00:07:58.207 NVMe-MI: Not Supported 00:07:58.207 Virtualization Management: Not Supported 00:07:58.207 Doorbell Buffer Config: Supported 00:07:58.207 Get LBA Status Capability: Not Supported 00:07:58.207 Command & Feature Lockdown Capability: Not Supported 00:07:58.207 Abort Command Limit: 4 00:07:58.207 Async Event Request Limit: 4 00:07:58.207 Number of Firmware Slots: N/A 00:07:58.207 Firmware Slot 1 Read-Only: N/A 00:07:58.207 Firmware Activation Without Reset: N/A 00:07:58.207 Multiple Update Detection Support: N/A 00:07:58.207 Firmware Update Granularity: No 
Information Provided 00:07:58.207 Per-Namespace SMART Log: Yes 00:07:58.207 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.207 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:58.207 Command Effects Log Page: Supported 00:07:58.207 Get Log Page Extended Data: Supported 00:07:58.207 Telemetry Log Pages: Not Supported 00:07:58.207 Persistent Event Log Pages: Not Supported 00:07:58.207 Supported Log Pages Log Page: May Support 00:07:58.208 Commands Supported & Effects Log Page: Not Supported 00:07:58.208 Feature Identifiers & Effects Log Page:May Support 00:07:58.208 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.208 Data Area 4 for Telemetry Log: Not Supported 00:07:58.208 Error Log Page Entries Supported: 1 00:07:58.208 Keep Alive: Not Supported 00:07:58.208 00:07:58.208 NVM Command Set Attributes 00:07:58.208 ========================== 00:07:58.208 Submission Queue Entry Size 00:07:58.208 Max: 64 00:07:58.208 Min: 64 00:07:58.208 Completion Queue Entry Size 00:07:58.208 Max: 16 00:07:58.208 Min: 16 00:07:58.208 Number of Namespaces: 256 00:07:58.208 Compare Command: Supported 00:07:58.208 Write Uncorrectable Command: Not Supported 00:07:58.208 Dataset Management Command: Supported 00:07:58.208 Write Zeroes Command: Supported 00:07:58.208 Set Features Save Field: Supported 00:07:58.208 Reservations: Not Supported 00:07:58.208 Timestamp: Supported 00:07:58.208 Copy: Supported 00:07:58.208 Volatile Write Cache: Present 00:07:58.208 Atomic Write Unit (Normal): 1 00:07:58.208 Atomic Write Unit (PFail): 1 00:07:58.208 Atomic Compare & Write Unit: 1 00:07:58.208 Fused Compare & Write: Not Supported 00:07:58.208 Scatter-Gather List 00:07:58.208 SGL Command Set: Supported 00:07:58.208 SGL Keyed: Not Supported 00:07:58.208 SGL Bit Bucket Descriptor: Not Supported 00:07:58.208 SGL Metadata Pointer: Not Supported 00:07:58.208 Oversized SGL: Not Supported 00:07:58.208 SGL Metadata Address: Not Supported 00:07:58.208 SGL Offset: Not Supported 00:07:58.208 Transport SGL Data Block: Not Supported 00:07:58.208 Replay Protected Memory Block: Not Supported 00:07:58.208 00:07:58.208 Firmware Slot Information 00:07:58.208 ========================= 00:07:58.208 Active slot: 1 00:07:58.208 Slot 1 Firmware Revision: 1.0 00:07:58.208 00:07:58.208 00:07:58.208 Commands Supported and Effects 00:07:58.208 ============================== 00:07:58.208 Admin Commands 00:07:58.208 -------------- 00:07:58.208 Delete I/O Submission Queue (00h): Supported 00:07:58.208 Create I/O Submission Queue (01h): Supported 00:07:58.208 Get Log Page (02h): Supported 00:07:58.208 Delete I/O Completion Queue (04h): Supported 00:07:58.208 Create I/O Completion Queue (05h): Supported 00:07:58.208 Identify (06h): Supported 00:07:58.208 Abort (08h): Supported 00:07:58.208 Set Features (09h): Supported 00:07:58.208 Get Features (0Ah): Supported 00:07:58.208 Asynchronous Event Request (0Ch): Supported 00:07:58.208 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.208 Directive Send (19h): Supported 00:07:58.208 Directive Receive (1Ah): Supported 00:07:58.208 Virtualization Management (1Ch): Supported 00:07:58.208 Doorbell Buffer Config (7Ch): Supported 00:07:58.208 Format NVM (80h): Supported LBA-Change 00:07:58.208 I/O Commands 00:07:58.208 ------------ 00:07:58.208 Flush (00h): Supported LBA-Change 00:07:58.208 Write (01h): Supported LBA-Change 00:07:58.208 Read (02h): Supported 00:07:58.208 Compare (05h): Supported 00:07:58.208 Write Zeroes (08h): Supported LBA-Change 00:07:58.208 Dataset Management 
(09h): Supported LBA-Change 00:07:58.208 Unknown (0Ch): Supported 00:07:58.208 Unknown (12h): Supported 00:07:58.208 Copy (19h): Supported LBA-Change 00:07:58.208 Unknown (1Dh): Supported LBA-Change 00:07:58.208 00:07:58.208 Error Log 00:07:58.208 ========= 00:07:58.208 00:07:58.208 Arbitration 00:07:58.208 =========== 00:07:58.208 Arbitration Burst: no limit 00:07:58.208 00:07:58.208 Power Management 00:07:58.208 ================ 00:07:58.208 Number of Power States: 1 00:07:58.208 Current Power State: Power State #0 00:07:58.208 Power State #0: 00:07:58.208 Max Power: 25.00 W 00:07:58.208 Non-Operational State: Operational 00:07:58.208 Entry Latency: 16 microseconds 00:07:58.208 Exit Latency: 4 microseconds 00:07:58.208 Relative Read Throughput: 0 00:07:58.208 Relative Read Latency: 0 00:07:58.208 Relative Write Throughput: 0 00:07:58.208 Relative Write Latency: 0 00:07:58.208 Idle Power: Not Reported 00:07:58.208 Active Power: Not Reported 00:07:58.208 Non-Operational Permissive Mode: Not Supported 00:07:58.208 00:07:58.208 Health Information 00:07:58.208 ================== 00:07:58.208 Critical Warnings: 00:07:58.208 Available Spare Space: OK 00:07:58.208 Temperature: OK 00:07:58.208 Device Reliability: OK 00:07:58.208 Read Only: No 00:07:58.208 Volatile Memory Backup: OK 00:07:58.208 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.208 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.208 Available Spare: 0% 00:07:58.208 Available Spare Threshold: 0% 00:07:58.208 Life Percentage Used: 0% 00:07:58.208 Data Units Read: 2095 00:07:58.208 Data Units Written: 1882 00:07:58.208 Host Read Commands: 109927 00:07:58.208 Host Write Commands: 108196 00:07:58.208 Controller Busy Time: 0 minutes 00:07:58.208 Power Cycles: 0 00:07:58.208 Power On Hours: 0 hours 00:07:58.208 Unsafe Shutdowns: 0 00:07:58.208 Unrecoverable Media Errors: 0 00:07:58.208 Lifetime Error Log Entries: 0 00:07:58.208 Warning Temperature Time: 0 minutes 00:07:58.208 Critical Temperature Time: 0 minutes 00:07:58.208 00:07:58.208 Number of Queues 00:07:58.208 ================ 00:07:58.208 Number of I/O Submission Queues: 64 00:07:58.208 Number of I/O Completion Queues: 64 00:07:58.208 00:07:58.208 ZNS Specific Controller Data 00:07:58.208 ============================ 00:07:58.208 Zone Append Size Limit: 0 00:07:58.208 00:07:58.208 00:07:58.208 Active Namespaces 00:07:58.208 ================= 00:07:58.208 Namespace ID:1 00:07:58.208 Error Recovery Timeout: Unlimited 00:07:58.208 Command Set Identifier: NVM (00h) 00:07:58.208 Deallocate: Supported 00:07:58.208 Deallocated/Unwritten Error: Supported 00:07:58.208 Deallocated Read Value: All 0x00 00:07:58.208 Deallocate in Write Zeroes: Not Supported 00:07:58.208 Deallocated Guard Field: 0xFFFF 00:07:58.208 Flush: Supported 00:07:58.208 Reservation: Not Supported 00:07:58.208 Namespace Sharing Capabilities: Private 00:07:58.208 Size (in LBAs): 1048576 (4GiB) 00:07:58.208 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.208 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.208 Thin Provisioning: Not Supported 00:07:58.208 Per-NS Atomic Units: No 00:07:58.208 Maximum Single Source Range Length: 128 00:07:58.208 Maximum Copy Length: 128 00:07:58.208 Maximum Source Range Count: 128 00:07:58.208 NGUID/EUI64 Never Reused: No 00:07:58.208 Namespace Write Protected: No 00:07:58.208 Number of LBA Formats: 8 00:07:58.208 Current LBA Format: LBA Format #04 00:07:58.208 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.208 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.208 
LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.208 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.208 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.208 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.208 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.208 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.208 00:07:58.208 NVM Specific Namespace Data 00:07:58.208 =========================== 00:07:58.208 Logical Block Storage Tag Mask: 0 00:07:58.208 Protection Information Capabilities: 00:07:58.208 16b Guard Protection Information Storage Tag Support: No 00:07:58.208 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.208 Storage Tag Check Read Support: No 00:07:58.208 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.208 Namespace ID:2 00:07:58.208 Error Recovery Timeout: Unlimited 00:07:58.208 Command Set Identifier: NVM (00h) 00:07:58.208 Deallocate: Supported 00:07:58.208 Deallocated/Unwritten Error: Supported 00:07:58.208 Deallocated Read Value: All 0x00 00:07:58.208 Deallocate in Write Zeroes: Not Supported 00:07:58.208 Deallocated Guard Field: 0xFFFF 00:07:58.208 Flush: Supported 00:07:58.208 Reservation: Not Supported 00:07:58.208 Namespace Sharing Capabilities: Private 00:07:58.208 Size (in LBAs): 1048576 (4GiB) 00:07:58.208 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.208 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.208 Thin Provisioning: Not Supported 00:07:58.208 Per-NS Atomic Units: No 00:07:58.209 Maximum Single Source Range Length: 128 00:07:58.209 Maximum Copy Length: 128 00:07:58.209 Maximum Source Range Count: 128 00:07:58.209 NGUID/EUI64 Never Reused: No 00:07:58.209 Namespace Write Protected: No 00:07:58.209 Number of LBA Formats: 8 00:07:58.209 Current LBA Format: LBA Format #04 00:07:58.209 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.209 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.209 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.209 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.209 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.209 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.209 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.209 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.209 00:07:58.209 NVM Specific Namespace Data 00:07:58.209 =========================== 00:07:58.209 Logical Block Storage Tag Mask: 0 00:07:58.209 Protection Information Capabilities: 00:07:58.209 16b Guard Protection Information Storage Tag Support: No 00:07:58.209 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.209 Storage Tag Check Read Support: No 00:07:58.209 Extended LBA Format #00: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Namespace ID:3 00:07:58.209 Error Recovery Timeout: Unlimited 00:07:58.209 Command Set Identifier: NVM (00h) 00:07:58.209 Deallocate: Supported 00:07:58.209 Deallocated/Unwritten Error: Supported 00:07:58.209 Deallocated Read Value: All 0x00 00:07:58.209 Deallocate in Write Zeroes: Not Supported 00:07:58.209 Deallocated Guard Field: 0xFFFF 00:07:58.209 Flush: Supported 00:07:58.209 Reservation: Not Supported 00:07:58.209 Namespace Sharing Capabilities: Private 00:07:58.209 Size (in LBAs): 1048576 (4GiB) 00:07:58.209 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.209 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.209 Thin Provisioning: Not Supported 00:07:58.209 Per-NS Atomic Units: No 00:07:58.209 Maximum Single Source Range Length: 128 00:07:58.209 Maximum Copy Length: 128 00:07:58.209 Maximum Source Range Count: 128 00:07:58.209 NGUID/EUI64 Never Reused: No 00:07:58.209 Namespace Write Protected: No 00:07:58.209 Number of LBA Formats: 8 00:07:58.209 Current LBA Format: LBA Format #04 00:07:58.209 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.209 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.209 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.209 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.209 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.209 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.209 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.209 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.209 00:07:58.209 NVM Specific Namespace Data 00:07:58.209 =========================== 00:07:58.209 Logical Block Storage Tag Mask: 0 00:07:58.209 Protection Information Capabilities: 00:07:58.209 16b Guard Protection Information Storage Tag Support: No 00:07:58.209 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.209 Storage Tag Check Read Support: No 00:07:58.209 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.209 00:53:21 
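The harness lines that follow re-run the identify pass once per controller (nvme/nvme.sh@15-16 in the traces below). A minimal sketch of that loop, assuming only the spdk_nvme_identify path and the four PCIe BDFs already visible in this log; the awk stage is an illustrative addition, not part of nvme.sh, and simply rederives the GiB figure the tool prints next to "Size (in LBAs)" for the 4096-byte-data LBA formats current on these controllers (1310720 x 4096 B = 5 GiB, 1048576 x 4096 B = 4 GiB, 262144 x 4096 B = 1 GiB):

    # Sketch only: mirrors the nvme.sh loop below; the awk cross-check is hypothetical.
    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        "$identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done | awk '
        # "Size (in LBAs): 1310720 (5GiB)" -> recompute GiB at 4096 B per LBA.
        # Namespaces whose current format is not 4 KiB data, or whose LBA count
        # is not a power-of-two multiple (12340 at 1548666 LBAs), differ slightly.
        /^Size \(in LBAs\):/ {
            printf "%s LBAs -> %.1f GiB at 4096 B/LBA\n", $4, $4 * 4096 / 2^30
        }'
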
nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:58.209 00:53:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:58.470 ===================================================== 00:07:58.470 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:58.470 ===================================================== 00:07:58.470 Controller Capabilities/Features 00:07:58.470 ================================ 00:07:58.470 Vendor ID: 1b36 00:07:58.470 Subsystem Vendor ID: 1af4 00:07:58.470 Serial Number: 12340 00:07:58.470 Model Number: QEMU NVMe Ctrl 00:07:58.470 Firmware Version: 8.0.0 00:07:58.470 Recommended Arb Burst: 6 00:07:58.470 IEEE OUI Identifier: 00 54 52 00:07:58.470 Multi-path I/O 00:07:58.470 May have multiple subsystem ports: No 00:07:58.470 May have multiple controllers: No 00:07:58.470 Associated with SR-IOV VF: No 00:07:58.470 Max Data Transfer Size: 524288 00:07:58.470 Max Number of Namespaces: 256 00:07:58.470 Max Number of I/O Queues: 64 00:07:58.470 NVMe Specification Version (VS): 1.4 00:07:58.470 NVMe Specification Version (Identify): 1.4 00:07:58.470 Maximum Queue Entries: 2048 00:07:58.470 Contiguous Queues Required: Yes 00:07:58.470 Arbitration Mechanisms Supported 00:07:58.470 Weighted Round Robin: Not Supported 00:07:58.470 Vendor Specific: Not Supported 00:07:58.470 Reset Timeout: 7500 ms 00:07:58.470 Doorbell Stride: 4 bytes 00:07:58.470 NVM Subsystem Reset: Not Supported 00:07:58.470 Command Sets Supported 00:07:58.470 NVM Command Set: Supported 00:07:58.470 Boot Partition: Not Supported 00:07:58.470 Memory Page Size Minimum: 4096 bytes 00:07:58.470 Memory Page Size Maximum: 65536 bytes 00:07:58.470 Persistent Memory Region: Not Supported 00:07:58.470 Optional Asynchronous Events Supported 00:07:58.470 Namespace Attribute Notices: Supported 00:07:58.470 Firmware Activation Notices: Not Supported 00:07:58.470 ANA Change Notices: Not Supported 00:07:58.470 PLE Aggregate Log Change Notices: Not Supported 00:07:58.470 LBA Status Info Alert Notices: Not Supported 00:07:58.470 EGE Aggregate Log Change Notices: Not Supported 00:07:58.470 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.470 Zone Descriptor Change Notices: Not Supported 00:07:58.470 Discovery Log Change Notices: Not Supported 00:07:58.470 Controller Attributes 00:07:58.470 128-bit Host Identifier: Not Supported 00:07:58.470 Non-Operational Permissive Mode: Not Supported 00:07:58.470 NVM Sets: Not Supported 00:07:58.470 Read Recovery Levels: Not Supported 00:07:58.470 Endurance Groups: Not Supported 00:07:58.470 Predictable Latency Mode: Not Supported 00:07:58.470 Traffic Based Keep ALive: Not Supported 00:07:58.470 Namespace Granularity: Not Supported 00:07:58.470 SQ Associations: Not Supported 00:07:58.470 UUID List: Not Supported 00:07:58.470 Multi-Domain Subsystem: Not Supported 00:07:58.470 Fixed Capacity Management: Not Supported 00:07:58.470 Variable Capacity Management: Not Supported 00:07:58.470 Delete Endurance Group: Not Supported 00:07:58.470 Delete NVM Set: Not Supported 00:07:58.470 Extended LBA Formats Supported: Supported 00:07:58.470 Flexible Data Placement Supported: Not Supported 00:07:58.470 00:07:58.470 Controller Memory Buffer Support 00:07:58.470 ================================ 00:07:58.470 Supported: No 00:07:58.470 00:07:58.470 Persistent Memory Region Support 00:07:58.470 ================================ 00:07:58.470 Supported: No 00:07:58.470 00:07:58.470 Admin 
Command Set Attributes 00:07:58.470 ============================ 00:07:58.470 Security Send/Receive: Not Supported 00:07:58.470 Format NVM: Supported 00:07:58.470 Firmware Activate/Download: Not Supported 00:07:58.470 Namespace Management: Supported 00:07:58.470 Device Self-Test: Not Supported 00:07:58.470 Directives: Supported 00:07:58.470 NVMe-MI: Not Supported 00:07:58.470 Virtualization Management: Not Supported 00:07:58.470 Doorbell Buffer Config: Supported 00:07:58.470 Get LBA Status Capability: Not Supported 00:07:58.470 Command & Feature Lockdown Capability: Not Supported 00:07:58.470 Abort Command Limit: 4 00:07:58.470 Async Event Request Limit: 4 00:07:58.470 Number of Firmware Slots: N/A 00:07:58.470 Firmware Slot 1 Read-Only: N/A 00:07:58.470 Firmware Activation Without Reset: N/A 00:07:58.470 Multiple Update Detection Support: N/A 00:07:58.470 Firmware Update Granularity: No Information Provided 00:07:58.470 Per-Namespace SMART Log: Yes 00:07:58.470 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.470 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:58.470 Command Effects Log Page: Supported 00:07:58.470 Get Log Page Extended Data: Supported 00:07:58.470 Telemetry Log Pages: Not Supported 00:07:58.470 Persistent Event Log Pages: Not Supported 00:07:58.470 Supported Log Pages Log Page: May Support 00:07:58.470 Commands Supported & Effects Log Page: Not Supported 00:07:58.470 Feature Identifiers & Effects Log Page:May Support 00:07:58.470 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.470 Data Area 4 for Telemetry Log: Not Supported 00:07:58.470 Error Log Page Entries Supported: 1 00:07:58.470 Keep Alive: Not Supported 00:07:58.470 00:07:58.470 NVM Command Set Attributes 00:07:58.470 ========================== 00:07:58.470 Submission Queue Entry Size 00:07:58.470 Max: 64 00:07:58.470 Min: 64 00:07:58.470 Completion Queue Entry Size 00:07:58.470 Max: 16 00:07:58.470 Min: 16 00:07:58.470 Number of Namespaces: 256 00:07:58.470 Compare Command: Supported 00:07:58.470 Write Uncorrectable Command: Not Supported 00:07:58.470 Dataset Management Command: Supported 00:07:58.470 Write Zeroes Command: Supported 00:07:58.470 Set Features Save Field: Supported 00:07:58.470 Reservations: Not Supported 00:07:58.470 Timestamp: Supported 00:07:58.470 Copy: Supported 00:07:58.470 Volatile Write Cache: Present 00:07:58.470 Atomic Write Unit (Normal): 1 00:07:58.470 Atomic Write Unit (PFail): 1 00:07:58.470 Atomic Compare & Write Unit: 1 00:07:58.470 Fused Compare & Write: Not Supported 00:07:58.470 Scatter-Gather List 00:07:58.470 SGL Command Set: Supported 00:07:58.470 SGL Keyed: Not Supported 00:07:58.470 SGL Bit Bucket Descriptor: Not Supported 00:07:58.470 SGL Metadata Pointer: Not Supported 00:07:58.470 Oversized SGL: Not Supported 00:07:58.470 SGL Metadata Address: Not Supported 00:07:58.470 SGL Offset: Not Supported 00:07:58.470 Transport SGL Data Block: Not Supported 00:07:58.470 Replay Protected Memory Block: Not Supported 00:07:58.470 00:07:58.470 Firmware Slot Information 00:07:58.470 ========================= 00:07:58.470 Active slot: 1 00:07:58.470 Slot 1 Firmware Revision: 1.0 00:07:58.470 00:07:58.470 00:07:58.470 Commands Supported and Effects 00:07:58.470 ============================== 00:07:58.470 Admin Commands 00:07:58.470 -------------- 00:07:58.470 Delete I/O Submission Queue (00h): Supported 00:07:58.470 Create I/O Submission Queue (01h): Supported 00:07:58.470 Get Log Page (02h): Supported 00:07:58.470 Delete I/O Completion Queue (04h): Supported 
00:07:58.470 Create I/O Completion Queue (05h): Supported 00:07:58.470 Identify (06h): Supported 00:07:58.470 Abort (08h): Supported 00:07:58.470 Set Features (09h): Supported 00:07:58.470 Get Features (0Ah): Supported 00:07:58.470 Asynchronous Event Request (0Ch): Supported 00:07:58.470 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.470 Directive Send (19h): Supported 00:07:58.471 Directive Receive (1Ah): Supported 00:07:58.471 Virtualization Management (1Ch): Supported 00:07:58.471 Doorbell Buffer Config (7Ch): Supported 00:07:58.471 Format NVM (80h): Supported LBA-Change 00:07:58.471 I/O Commands 00:07:58.471 ------------ 00:07:58.471 Flush (00h): Supported LBA-Change 00:07:58.471 Write (01h): Supported LBA-Change 00:07:58.471 Read (02h): Supported 00:07:58.471 Compare (05h): Supported 00:07:58.471 Write Zeroes (08h): Supported LBA-Change 00:07:58.471 Dataset Management (09h): Supported LBA-Change 00:07:58.471 Unknown (0Ch): Supported 00:07:58.471 Unknown (12h): Supported 00:07:58.471 Copy (19h): Supported LBA-Change 00:07:58.471 Unknown (1Dh): Supported LBA-Change 00:07:58.471 00:07:58.471 Error Log 00:07:58.471 ========= 00:07:58.471 00:07:58.471 Arbitration 00:07:58.471 =========== 00:07:58.471 Arbitration Burst: no limit 00:07:58.471 00:07:58.471 Power Management 00:07:58.471 ================ 00:07:58.471 Number of Power States: 1 00:07:58.471 Current Power State: Power State #0 00:07:58.471 Power State #0: 00:07:58.471 Max Power: 25.00 W 00:07:58.471 Non-Operational State: Operational 00:07:58.471 Entry Latency: 16 microseconds 00:07:58.471 Exit Latency: 4 microseconds 00:07:58.471 Relative Read Throughput: 0 00:07:58.471 Relative Read Latency: 0 00:07:58.471 Relative Write Throughput: 0 00:07:58.471 Relative Write Latency: 0 00:07:58.471 Idle Power: Not Reported 00:07:58.471 Active Power: Not Reported 00:07:58.471 Non-Operational Permissive Mode: Not Supported 00:07:58.471 00:07:58.471 Health Information 00:07:58.471 ================== 00:07:58.471 Critical Warnings: 00:07:58.471 Available Spare Space: OK 00:07:58.471 Temperature: OK 00:07:58.471 Device Reliability: OK 00:07:58.471 Read Only: No 00:07:58.471 Volatile Memory Backup: OK 00:07:58.471 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.471 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.471 Available Spare: 0% 00:07:58.471 Available Spare Threshold: 0% 00:07:58.471 Life Percentage Used: 0% 00:07:58.471 Data Units Read: 671 00:07:58.471 Data Units Written: 599 00:07:58.471 Host Read Commands: 36278 00:07:58.471 Host Write Commands: 36064 00:07:58.471 Controller Busy Time: 0 minutes 00:07:58.471 Power Cycles: 0 00:07:58.471 Power On Hours: 0 hours 00:07:58.471 Unsafe Shutdowns: 0 00:07:58.471 Unrecoverable Media Errors: 0 00:07:58.471 Lifetime Error Log Entries: 0 00:07:58.471 Warning Temperature Time: 0 minutes 00:07:58.471 Critical Temperature Time: 0 minutes 00:07:58.471 00:07:58.471 Number of Queues 00:07:58.471 ================ 00:07:58.471 Number of I/O Submission Queues: 64 00:07:58.471 Number of I/O Completion Queues: 64 00:07:58.471 00:07:58.471 ZNS Specific Controller Data 00:07:58.471 ============================ 00:07:58.471 Zone Append Size Limit: 0 00:07:58.471 00:07:58.471 00:07:58.471 Active Namespaces 00:07:58.471 ================= 00:07:58.471 Namespace ID:1 00:07:58.471 Error Recovery Timeout: Unlimited 00:07:58.471 Command Set Identifier: NVM (00h) 00:07:58.471 Deallocate: Supported 00:07:58.471 Deallocated/Unwritten Error: Supported 00:07:58.471 Deallocated Read Value: 
All 0x00 00:07:58.471 Deallocate in Write Zeroes: Not Supported 00:07:58.471 Deallocated Guard Field: 0xFFFF 00:07:58.471 Flush: Supported 00:07:58.471 Reservation: Not Supported 00:07:58.471 Metadata Transferred as: Separate Metadata Buffer 00:07:58.471 Namespace Sharing Capabilities: Private 00:07:58.471 Size (in LBAs): 1548666 (5GiB) 00:07:58.471 Capacity (in LBAs): 1548666 (5GiB) 00:07:58.471 Utilization (in LBAs): 1548666 (5GiB) 00:07:58.471 Thin Provisioning: Not Supported 00:07:58.471 Per-NS Atomic Units: No 00:07:58.471 Maximum Single Source Range Length: 128 00:07:58.471 Maximum Copy Length: 128 00:07:58.471 Maximum Source Range Count: 128 00:07:58.471 NGUID/EUI64 Never Reused: No 00:07:58.471 Namespace Write Protected: No 00:07:58.471 Number of LBA Formats: 8 00:07:58.471 Current LBA Format: LBA Format #07 00:07:58.471 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.471 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.471 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.471 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.471 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.471 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.471 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.471 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.471 00:07:58.471 NVM Specific Namespace Data 00:07:58.471 =========================== 00:07:58.471 Logical Block Storage Tag Mask: 0 00:07:58.471 Protection Information Capabilities: 00:07:58.471 16b Guard Protection Information Storage Tag Support: No 00:07:58.471 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.471 Storage Tag Check Read Support: No 00:07:58.471 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.471 00:53:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:58.471 00:53:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:58.730 ===================================================== 00:07:58.730 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:58.730 ===================================================== 00:07:58.730 Controller Capabilities/Features 00:07:58.730 ================================ 00:07:58.730 Vendor ID: 1b36 00:07:58.730 Subsystem Vendor ID: 1af4 00:07:58.730 Serial Number: 12341 00:07:58.730 Model Number: QEMU NVMe Ctrl 00:07:58.730 Firmware Version: 8.0.0 00:07:58.730 Recommended Arb Burst: 6 00:07:58.730 IEEE OUI Identifier: 00 54 52 00:07:58.730 Multi-path I/O 00:07:58.730 May have multiple subsystem ports: No 00:07:58.730 May have multiple controllers: No 00:07:58.730 Associated with SR-IOV VF: No 00:07:58.730 Max Data Transfer 
Size: 524288 00:07:58.730 Max Number of Namespaces: 256 00:07:58.730 Max Number of I/O Queues: 64 00:07:58.730 NVMe Specification Version (VS): 1.4 00:07:58.730 NVMe Specification Version (Identify): 1.4 00:07:58.730 Maximum Queue Entries: 2048 00:07:58.730 Contiguous Queues Required: Yes 00:07:58.731 Arbitration Mechanisms Supported 00:07:58.731 Weighted Round Robin: Not Supported 00:07:58.731 Vendor Specific: Not Supported 00:07:58.731 Reset Timeout: 7500 ms 00:07:58.731 Doorbell Stride: 4 bytes 00:07:58.731 NVM Subsystem Reset: Not Supported 00:07:58.731 Command Sets Supported 00:07:58.731 NVM Command Set: Supported 00:07:58.731 Boot Partition: Not Supported 00:07:58.731 Memory Page Size Minimum: 4096 bytes 00:07:58.731 Memory Page Size Maximum: 65536 bytes 00:07:58.731 Persistent Memory Region: Not Supported 00:07:58.731 Optional Asynchronous Events Supported 00:07:58.731 Namespace Attribute Notices: Supported 00:07:58.731 Firmware Activation Notices: Not Supported 00:07:58.731 ANA Change Notices: Not Supported 00:07:58.731 PLE Aggregate Log Change Notices: Not Supported 00:07:58.731 LBA Status Info Alert Notices: Not Supported 00:07:58.731 EGE Aggregate Log Change Notices: Not Supported 00:07:58.731 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.731 Zone Descriptor Change Notices: Not Supported 00:07:58.731 Discovery Log Change Notices: Not Supported 00:07:58.731 Controller Attributes 00:07:58.731 128-bit Host Identifier: Not Supported 00:07:58.731 Non-Operational Permissive Mode: Not Supported 00:07:58.731 NVM Sets: Not Supported 00:07:58.731 Read Recovery Levels: Not Supported 00:07:58.731 Endurance Groups: Not Supported 00:07:58.731 Predictable Latency Mode: Not Supported 00:07:58.731 Traffic Based Keep Alive: Not Supported 00:07:58.731 Namespace Granularity: Not Supported 00:07:58.731 SQ Associations: Not Supported 00:07:58.731 UUID List: Not Supported 00:07:58.731 Multi-Domain Subsystem: Not Supported 00:07:58.731 Fixed Capacity Management: Not Supported 00:07:58.731 Variable Capacity Management: Not Supported 00:07:58.731 Delete Endurance Group: Not Supported 00:07:58.731 Delete NVM Set: Not Supported 00:07:58.731 Extended LBA Formats Supported: Supported 00:07:58.731 Flexible Data Placement Supported: Not Supported 00:07:58.731 00:07:58.731 Controller Memory Buffer Support 00:07:58.731 ================================ 00:07:58.731 Supported: No 00:07:58.731 00:07:58.731 Persistent Memory Region Support 00:07:58.731 ================================ 00:07:58.731 Supported: No 00:07:58.731 00:07:58.731 Admin Command Set Attributes 00:07:58.731 ============================ 00:07:58.731 Security Send/Receive: Not Supported 00:07:58.731 Format NVM: Supported 00:07:58.731 Firmware Activate/Download: Not Supported 00:07:58.731 Namespace Management: Supported 00:07:58.731 Device Self-Test: Not Supported 00:07:58.731 Directives: Supported 00:07:58.731 NVMe-MI: Not Supported 00:07:58.731 Virtualization Management: Not Supported 00:07:58.731 Doorbell Buffer Config: Supported 00:07:58.731 Get LBA Status Capability: Not Supported 00:07:58.731 Command & Feature Lockdown Capability: Not Supported 00:07:58.731 Abort Command Limit: 4 00:07:58.731 Async Event Request Limit: 4 00:07:58.731 Number of Firmware Slots: N/A 00:07:58.731 Firmware Slot 1 Read-Only: N/A 00:07:58.731 Firmware Activation Without Reset: N/A 00:07:58.731 Multiple Update Detection Support: N/A 00:07:58.731 Firmware Update Granularity: No Information Provided 00:07:58.731 Per-Namespace SMART Log: Yes 00:07:58.731
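An aside on how these dumps are produced (the 12341 dump continues below): the trace markers nvme/nvme.sh@15 and nvme/nvme.sh@16 show a bash loop that runs spdk_nvme_identify once per PCIe BDF. A minimal standalone sketch of that step, with the binary path taken verbatim from this log and the bdfs list assumed from the controllers attached later in the run:

    # Sketch: dump identify data for every NVMe controller under test.
    # SPDK_BIN is the path as it appears in this log; the bdfs array is an
    # assumption based on the controllers that show up in this run.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done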
Asymmetric Namespace Access Log Page: Not Supported 00:07:58.731 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:58.731 Command Effects Log Page: Supported 00:07:58.731 Get Log Page Extended Data: Supported 00:07:58.731 Telemetry Log Pages: Not Supported 00:07:58.731 Persistent Event Log Pages: Not Supported 00:07:58.731 Supported Log Pages Log Page: May Support 00:07:58.731 Commands Supported & Effects Log Page: Not Supported 00:07:58.731 Feature Identifiers & Effects Log Page:May Support 00:07:58.731 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.731 Data Area 4 for Telemetry Log: Not Supported 00:07:58.731 Error Log Page Entries Supported: 1 00:07:58.731 Keep Alive: Not Supported 00:07:58.731 00:07:58.731 NVM Command Set Attributes 00:07:58.731 ========================== 00:07:58.731 Submission Queue Entry Size 00:07:58.731 Max: 64 00:07:58.731 Min: 64 00:07:58.731 Completion Queue Entry Size 00:07:58.731 Max: 16 00:07:58.731 Min: 16 00:07:58.731 Number of Namespaces: 256 00:07:58.731 Compare Command: Supported 00:07:58.731 Write Uncorrectable Command: Not Supported 00:07:58.731 Dataset Management Command: Supported 00:07:58.731 Write Zeroes Command: Supported 00:07:58.731 Set Features Save Field: Supported 00:07:58.731 Reservations: Not Supported 00:07:58.731 Timestamp: Supported 00:07:58.731 Copy: Supported 00:07:58.731 Volatile Write Cache: Present 00:07:58.731 Atomic Write Unit (Normal): 1 00:07:58.731 Atomic Write Unit (PFail): 1 00:07:58.731 Atomic Compare & Write Unit: 1 00:07:58.731 Fused Compare & Write: Not Supported 00:07:58.731 Scatter-Gather List 00:07:58.731 SGL Command Set: Supported 00:07:58.731 SGL Keyed: Not Supported 00:07:58.731 SGL Bit Bucket Descriptor: Not Supported 00:07:58.731 SGL Metadata Pointer: Not Supported 00:07:58.731 Oversized SGL: Not Supported 00:07:58.731 SGL Metadata Address: Not Supported 00:07:58.731 SGL Offset: Not Supported 00:07:58.731 Transport SGL Data Block: Not Supported 00:07:58.731 Replay Protected Memory Block: Not Supported 00:07:58.731 00:07:58.731 Firmware Slot Information 00:07:58.731 ========================= 00:07:58.731 Active slot: 1 00:07:58.731 Slot 1 Firmware Revision: 1.0 00:07:58.731 00:07:58.731 00:07:58.731 Commands Supported and Effects 00:07:58.731 ============================== 00:07:58.731 Admin Commands 00:07:58.731 -------------- 00:07:58.731 Delete I/O Submission Queue (00h): Supported 00:07:58.731 Create I/O Submission Queue (01h): Supported 00:07:58.731 Get Log Page (02h): Supported 00:07:58.731 Delete I/O Completion Queue (04h): Supported 00:07:58.731 Create I/O Completion Queue (05h): Supported 00:07:58.731 Identify (06h): Supported 00:07:58.731 Abort (08h): Supported 00:07:58.731 Set Features (09h): Supported 00:07:58.731 Get Features (0Ah): Supported 00:07:58.731 Asynchronous Event Request (0Ch): Supported 00:07:58.731 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.731 Directive Send (19h): Supported 00:07:58.731 Directive Receive (1Ah): Supported 00:07:58.731 Virtualization Management (1Ch): Supported 00:07:58.731 Doorbell Buffer Config (7Ch): Supported 00:07:58.731 Format NVM (80h): Supported LBA-Change 00:07:58.731 I/O Commands 00:07:58.731 ------------ 00:07:58.731 Flush (00h): Supported LBA-Change 00:07:58.731 Write (01h): Supported LBA-Change 00:07:58.731 Read (02h): Supported 00:07:58.731 Compare (05h): Supported 00:07:58.731 Write Zeroes (08h): Supported LBA-Change 00:07:58.731 Dataset Management (09h): Supported LBA-Change 00:07:58.731 Unknown (0Ch): Supported 
00:07:58.731 Unknown (12h): Supported 00:07:58.731 Copy (19h): Supported LBA-Change 00:07:58.731 Unknown (1Dh): Supported LBA-Change 00:07:58.731 00:07:58.731 Error Log 00:07:58.731 ========= 00:07:58.731 00:07:58.731 Arbitration 00:07:58.731 =========== 00:07:58.731 Arbitration Burst: no limit 00:07:58.731 00:07:58.731 Power Management 00:07:58.731 ================ 00:07:58.731 Number of Power States: 1 00:07:58.731 Current Power State: Power State #0 00:07:58.731 Power State #0: 00:07:58.731 Max Power: 25.00 W 00:07:58.731 Non-Operational State: Operational 00:07:58.731 Entry Latency: 16 microseconds 00:07:58.731 Exit Latency: 4 microseconds 00:07:58.731 Relative Read Throughput: 0 00:07:58.731 Relative Read Latency: 0 00:07:58.731 Relative Write Throughput: 0 00:07:58.731 Relative Write Latency: 0 00:07:58.731 Idle Power: Not Reported 00:07:58.731 Active Power: Not Reported 00:07:58.731 Non-Operational Permissive Mode: Not Supported 00:07:58.731 00:07:58.731 Health Information 00:07:58.731 ================== 00:07:58.731 Critical Warnings: 00:07:58.731 Available Spare Space: OK 00:07:58.731 Temperature: OK 00:07:58.731 Device Reliability: OK 00:07:58.731 Read Only: No 00:07:58.731 Volatile Memory Backup: OK 00:07:58.731 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.731 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.731 Available Spare: 0% 00:07:58.731 Available Spare Threshold: 0% 00:07:58.731 Life Percentage Used: 0% 00:07:58.731 Data Units Read: 1032 00:07:58.731 Data Units Written: 893 00:07:58.732 Host Read Commands: 53139 00:07:58.732 Host Write Commands: 51816 00:07:58.732 Controller Busy Time: 0 minutes 00:07:58.732 Power Cycles: 0 00:07:58.732 Power On Hours: 0 hours 00:07:58.732 Unsafe Shutdowns: 0 00:07:58.732 Unrecoverable Media Errors: 0 00:07:58.732 Lifetime Error Log Entries: 0 00:07:58.732 Warning Temperature Time: 0 minutes 00:07:58.732 Critical Temperature Time: 0 minutes 00:07:58.732 00:07:58.732 Number of Queues 00:07:58.732 ================ 00:07:58.732 Number of I/O Submission Queues: 64 00:07:58.732 Number of I/O Completion Queues: 64 00:07:58.732 00:07:58.732 ZNS Specific Controller Data 00:07:58.732 ============================ 00:07:58.732 Zone Append Size Limit: 0 00:07:58.732 00:07:58.732 00:07:58.732 Active Namespaces 00:07:58.732 ================= 00:07:58.732 Namespace ID:1 00:07:58.732 Error Recovery Timeout: Unlimited 00:07:58.732 Command Set Identifier: NVM (00h) 00:07:58.732 Deallocate: Supported 00:07:58.732 Deallocated/Unwritten Error: Supported 00:07:58.732 Deallocated Read Value: All 0x00 00:07:58.732 Deallocate in Write Zeroes: Not Supported 00:07:58.732 Deallocated Guard Field: 0xFFFF 00:07:58.732 Flush: Supported 00:07:58.732 Reservation: Not Supported 00:07:58.732 Namespace Sharing Capabilities: Private 00:07:58.732 Size (in LBAs): 1310720 (5GiB) 00:07:58.732 Capacity (in LBAs): 1310720 (5GiB) 00:07:58.732 Utilization (in LBAs): 1310720 (5GiB) 00:07:58.732 Thin Provisioning: Not Supported 00:07:58.732 Per-NS Atomic Units: No 00:07:58.732 Maximum Single Source Range Length: 128 00:07:58.732 Maximum Copy Length: 128 00:07:58.732 Maximum Source Range Count: 128 00:07:58.732 NGUID/EUI64 Never Reused: No 00:07:58.732 Namespace Write Protected: No 00:07:58.732 Number of LBA Formats: 8 00:07:58.732 Current LBA Format: LBA Format #04 00:07:58.732 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.732 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.732 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.732 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:07:58.732 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.732 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.732 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.732 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.732 00:07:58.732 NVM Specific Namespace Data 00:07:58.732 =========================== 00:07:58.732 Logical Block Storage Tag Mask: 0 00:07:58.732 Protection Information Capabilities: 00:07:58.732 16b Guard Protection Information Storage Tag Support: No 00:07:58.732 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.732 Storage Tag Check Read Support: No 00:07:58.732 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.732 00:53:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:58.732 00:53:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:58.993 ===================================================== 00:07:58.993 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:58.993 ===================================================== 00:07:58.993 Controller Capabilities/Features 00:07:58.993 ================================ 00:07:58.993 Vendor ID: 1b36 00:07:58.993 Subsystem Vendor ID: 1af4 00:07:58.993 Serial Number: 12342 00:07:58.993 Model Number: QEMU NVMe Ctrl 00:07:58.993 Firmware Version: 8.0.0 00:07:58.993 Recommended Arb Burst: 6 00:07:58.993 IEEE OUI Identifier: 00 54 52 00:07:58.993 Multi-path I/O 00:07:58.993 May have multiple subsystem ports: No 00:07:58.993 May have multiple controllers: No 00:07:58.993 Associated with SR-IOV VF: No 00:07:58.993 Max Data Transfer Size: 524288 00:07:58.993 Max Number of Namespaces: 256 00:07:58.993 Max Number of I/O Queues: 64 00:07:58.993 NVMe Specification Version (VS): 1.4 00:07:58.993 NVMe Specification Version (Identify): 1.4 00:07:58.993 Maximum Queue Entries: 2048 00:07:58.993 Contiguous Queues Required: Yes 00:07:58.993 Arbitration Mechanisms Supported 00:07:58.993 Weighted Round Robin: Not Supported 00:07:58.993 Vendor Specific: Not Supported 00:07:58.993 Reset Timeout: 7500 ms 00:07:58.993 Doorbell Stride: 4 bytes 00:07:58.993 NVM Subsystem Reset: Not Supported 00:07:58.993 Command Sets Supported 00:07:58.993 NVM Command Set: Supported 00:07:58.993 Boot Partition: Not Supported 00:07:58.993 Memory Page Size Minimum: 4096 bytes 00:07:58.993 Memory Page Size Maximum: 65536 bytes 00:07:58.993 Persistent Memory Region: Not Supported 00:07:58.993 Optional Asynchronous Events Supported 00:07:58.993 Namespace Attribute Notices: Supported 00:07:58.993 Firmware Activation Notices: Not Supported 00:07:58.993 ANA Change Notices: Not 
Supported 00:07:58.993 PLE Aggregate Log Change Notices: Not Supported 00:07:58.993 LBA Status Info Alert Notices: Not Supported 00:07:58.993 EGE Aggregate Log Change Notices: Not Supported 00:07:58.993 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.993 Zone Descriptor Change Notices: Not Supported 00:07:58.993 Discovery Log Change Notices: Not Supported 00:07:58.993 Controller Attributes 00:07:58.993 128-bit Host Identifier: Not Supported 00:07:58.993 Non-Operational Permissive Mode: Not Supported 00:07:58.993 NVM Sets: Not Supported 00:07:58.993 Read Recovery Levels: Not Supported 00:07:58.993 Endurance Groups: Not Supported 00:07:58.993 Predictable Latency Mode: Not Supported 00:07:58.993 Traffic Based Keep Alive: Not Supported 00:07:58.993 Namespace Granularity: Not Supported 00:07:58.993 SQ Associations: Not Supported 00:07:58.993 UUID List: Not Supported 00:07:58.993 Multi-Domain Subsystem: Not Supported 00:07:58.993 Fixed Capacity Management: Not Supported 00:07:58.993 Variable Capacity Management: Not Supported 00:07:58.993 Delete Endurance Group: Not Supported 00:07:58.993 Delete NVM Set: Not Supported 00:07:58.993 Extended LBA Formats Supported: Supported 00:07:58.993 Flexible Data Placement Supported: Not Supported 00:07:58.993 00:07:58.993 Controller Memory Buffer Support 00:07:58.993 ================================ 00:07:58.993 Supported: No 00:07:58.993 00:07:58.993 Persistent Memory Region Support 00:07:58.993 ================================ 00:07:58.993 Supported: No 00:07:58.993 00:07:58.993 Admin Command Set Attributes 00:07:58.993 ============================ 00:07:58.993 Security Send/Receive: Not Supported 00:07:58.993 Format NVM: Supported 00:07:58.993 Firmware Activate/Download: Not Supported 00:07:58.993 Namespace Management: Supported 00:07:58.993 Device Self-Test: Not Supported 00:07:58.993 Directives: Supported 00:07:58.993 NVMe-MI: Not Supported 00:07:58.993 Virtualization Management: Not Supported 00:07:58.993 Doorbell Buffer Config: Supported 00:07:58.993 Get LBA Status Capability: Not Supported 00:07:58.993 Command & Feature Lockdown Capability: Not Supported 00:07:58.993 Abort Command Limit: 4 00:07:58.993 Async Event Request Limit: 4 00:07:58.993 Number of Firmware Slots: N/A 00:07:58.993 Firmware Slot 1 Read-Only: N/A 00:07:58.993 Firmware Activation Without Reset: N/A 00:07:58.993 Multiple Update Detection Support: N/A 00:07:58.993 Firmware Update Granularity: No Information Provided 00:07:58.993 Per-Namespace SMART Log: Yes 00:07:58.993 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.993 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:58.993 Command Effects Log Page: Supported 00:07:58.993 Get Log Page Extended Data: Supported 00:07:58.993 Telemetry Log Pages: Not Supported 00:07:58.993 Persistent Event Log Pages: Not Supported 00:07:58.993 Supported Log Pages Log Page: May Support 00:07:58.993 Commands Supported & Effects Log Page: Not Supported 00:07:58.993 Feature Identifiers & Effects Log Page:May Support 00:07:58.993 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.993 Data Area 4 for Telemetry Log: Not Supported 00:07:58.993 Error Log Page Entries Supported: 1 00:07:58.993 Keep Alive: Not Supported 00:07:58.993 00:07:58.993 NVM Command Set Attributes 00:07:58.993 ========================== 00:07:58.993 Submission Queue Entry Size 00:07:58.993 Max: 64 00:07:58.993 Min: 64 00:07:58.993 Completion Queue Entry Size 00:07:58.993 Max: 16 00:07:58.993 Min: 16 00:07:58.993 Number of Namespaces: 256 00:07:58.993 Compare
Command: Supported 00:07:58.993 Write Uncorrectable Command: Not Supported 00:07:58.993 Dataset Management Command: Supported 00:07:58.993 Write Zeroes Command: Supported 00:07:58.993 Set Features Save Field: Supported 00:07:58.993 Reservations: Not Supported 00:07:58.993 Timestamp: Supported 00:07:58.993 Copy: Supported 00:07:58.993 Volatile Write Cache: Present 00:07:58.993 Atomic Write Unit (Normal): 1 00:07:58.993 Atomic Write Unit (PFail): 1 00:07:58.993 Atomic Compare & Write Unit: 1 00:07:58.993 Fused Compare & Write: Not Supported 00:07:58.993 Scatter-Gather List 00:07:58.993 SGL Command Set: Supported 00:07:58.993 SGL Keyed: Not Supported 00:07:58.993 SGL Bit Bucket Descriptor: Not Supported 00:07:58.993 SGL Metadata Pointer: Not Supported 00:07:58.993 Oversized SGL: Not Supported 00:07:58.993 SGL Metadata Address: Not Supported 00:07:58.993 SGL Offset: Not Supported 00:07:58.993 Transport SGL Data Block: Not Supported 00:07:58.993 Replay Protected Memory Block: Not Supported 00:07:58.993 00:07:58.993 Firmware Slot Information 00:07:58.993 ========================= 00:07:58.993 Active slot: 1 00:07:58.993 Slot 1 Firmware Revision: 1.0 00:07:58.993 00:07:58.993 00:07:58.993 Commands Supported and Effects 00:07:58.993 ============================== 00:07:58.993 Admin Commands 00:07:58.993 -------------- 00:07:58.993 Delete I/O Submission Queue (00h): Supported 00:07:58.993 Create I/O Submission Queue (01h): Supported 00:07:58.993 Get Log Page (02h): Supported 00:07:58.993 Delete I/O Completion Queue (04h): Supported 00:07:58.993 Create I/O Completion Queue (05h): Supported 00:07:58.993 Identify (06h): Supported 00:07:58.993 Abort (08h): Supported 00:07:58.993 Set Features (09h): Supported 00:07:58.993 Get Features (0Ah): Supported 00:07:58.993 Asynchronous Event Request (0Ch): Supported 00:07:58.993 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.993 Directive Send (19h): Supported 00:07:58.993 Directive Receive (1Ah): Supported 00:07:58.993 Virtualization Management (1Ch): Supported 00:07:58.993 Doorbell Buffer Config (7Ch): Supported 00:07:58.993 Format NVM (80h): Supported LBA-Change 00:07:58.993 I/O Commands 00:07:58.993 ------------ 00:07:58.993 Flush (00h): Supported LBA-Change 00:07:58.993 Write (01h): Supported LBA-Change 00:07:58.993 Read (02h): Supported 00:07:58.993 Compare (05h): Supported 00:07:58.993 Write Zeroes (08h): Supported LBA-Change 00:07:58.993 Dataset Management (09h): Supported LBA-Change 00:07:58.993 Unknown (0Ch): Supported 00:07:58.993 Unknown (12h): Supported 00:07:58.993 Copy (19h): Supported LBA-Change 00:07:58.993 Unknown (1Dh): Supported LBA-Change 00:07:58.993 00:07:58.993 Error Log 00:07:58.993 ========= 00:07:58.993 00:07:58.993 Arbitration 00:07:58.993 =========== 00:07:58.993 Arbitration Burst: no limit 00:07:58.993 00:07:58.993 Power Management 00:07:58.993 ================ 00:07:58.993 Number of Power States: 1 00:07:58.993 Current Power State: Power State #0 00:07:58.993 Power State #0: 00:07:58.993 Max Power: 25.00 W 00:07:58.993 Non-Operational State: Operational 00:07:58.993 Entry Latency: 16 microseconds 00:07:58.993 Exit Latency: 4 microseconds 00:07:58.993 Relative Read Throughput: 0 00:07:58.994 Relative Read Latency: 0 00:07:58.994 Relative Write Throughput: 0 00:07:58.994 Relative Write Latency: 0 00:07:58.994 Idle Power: Not Reported 00:07:58.994 Active Power: Not Reported 00:07:58.994 Non-Operational Permissive Mode: Not Supported 00:07:58.994 00:07:58.994 Health Information 00:07:58.994 ================== 
00:07:58.994 Critical Warnings: 00:07:58.994 Available Spare Space: OK 00:07:58.994 Temperature: OK 00:07:58.994 Device Reliability: OK 00:07:58.994 Read Only: No 00:07:58.994 Volatile Memory Backup: OK 00:07:58.994 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.994 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.994 Available Spare: 0% 00:07:58.994 Available Spare Threshold: 0% 00:07:58.994 Life Percentage Used: 0% 00:07:58.994 Data Units Read: 2095 00:07:58.994 Data Units Written: 1882 00:07:58.994 Host Read Commands: 109927 00:07:58.994 Host Write Commands: 108196 00:07:58.994 Controller Busy Time: 0 minutes 00:07:58.994 Power Cycles: 0 00:07:58.994 Power On Hours: 0 hours 00:07:58.994 Unsafe Shutdowns: 0 00:07:58.994 Unrecoverable Media Errors: 0 00:07:58.994 Lifetime Error Log Entries: 0 00:07:58.994 Warning Temperature Time: 0 minutes 00:07:58.994 Critical Temperature Time: 0 minutes 00:07:58.994 00:07:58.994 Number of Queues 00:07:58.994 ================ 00:07:58.994 Number of I/O Submission Queues: 64 00:07:58.994 Number of I/O Completion Queues: 64 00:07:58.994 00:07:58.994 ZNS Specific Controller Data 00:07:58.994 ============================ 00:07:58.994 Zone Append Size Limit: 0 00:07:58.994 00:07:58.994 00:07:58.994 Active Namespaces 00:07:58.994 ================= 00:07:58.994 Namespace ID:1 00:07:58.994 Error Recovery Timeout: Unlimited 00:07:58.994 Command Set Identifier: NVM (00h) 00:07:58.994 Deallocate: Supported 00:07:58.994 Deallocated/Unwritten Error: Supported 00:07:58.994 Deallocated Read Value: All 0x00 00:07:58.994 Deallocate in Write Zeroes: Not Supported 00:07:58.994 Deallocated Guard Field: 0xFFFF 00:07:58.994 Flush: Supported 00:07:58.994 Reservation: Not Supported 00:07:58.994 Namespace Sharing Capabilities: Private 00:07:58.994 Size (in LBAs): 1048576 (4GiB) 00:07:58.994 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.994 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.994 Thin Provisioning: Not Supported 00:07:58.994 Per-NS Atomic Units: No 00:07:58.994 Maximum Single Source Range Length: 128 00:07:58.994 Maximum Copy Length: 128 00:07:58.994 Maximum Source Range Count: 128 00:07:58.994 NGUID/EUI64 Never Reused: No 00:07:58.994 Namespace Write Protected: No 00:07:58.994 Number of LBA Formats: 8 00:07:58.994 Current LBA Format: LBA Format #04 00:07:58.994 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.994 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.994 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.994 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.994 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.994 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.994 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.994 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.994 00:07:58.994 NVM Specific Namespace Data 00:07:58.994 =========================== 00:07:58.994 Logical Block Storage Tag Mask: 0 00:07:58.994 Protection Information Capabilities: 00:07:58.994 16b Guard Protection Information Storage Tag Support: No 00:07:58.994 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.994 Storage Tag Check Read Support: No 00:07:58.994 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended 
LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Namespace ID:2 00:07:58.994 Error Recovery Timeout: Unlimited 00:07:58.994 Command Set Identifier: NVM (00h) 00:07:58.994 Deallocate: Supported 00:07:58.994 Deallocated/Unwritten Error: Supported 00:07:58.994 Deallocated Read Value: All 0x00 00:07:58.994 Deallocate in Write Zeroes: Not Supported 00:07:58.994 Deallocated Guard Field: 0xFFFF 00:07:58.994 Flush: Supported 00:07:58.994 Reservation: Not Supported 00:07:58.994 Namespace Sharing Capabilities: Private 00:07:58.994 Size (in LBAs): 1048576 (4GiB) 00:07:58.994 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.994 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.994 Thin Provisioning: Not Supported 00:07:58.994 Per-NS Atomic Units: No 00:07:58.994 Maximum Single Source Range Length: 128 00:07:58.994 Maximum Copy Length: 128 00:07:58.994 Maximum Source Range Count: 128 00:07:58.994 NGUID/EUI64 Never Reused: No 00:07:58.994 Namespace Write Protected: No 00:07:58.994 Number of LBA Formats: 8 00:07:58.994 Current LBA Format: LBA Format #04 00:07:58.994 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.994 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.994 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.994 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.994 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.994 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.994 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.994 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.994 00:07:58.994 NVM Specific Namespace Data 00:07:58.994 =========================== 00:07:58.994 Logical Block Storage Tag Mask: 0 00:07:58.994 Protection Information Capabilities: 00:07:58.994 16b Guard Protection Information Storage Tag Support: No 00:07:58.994 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.994 Storage Tag Check Read Support: No 00:07:58.994 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Namespace ID:3 00:07:58.994 Error Recovery Timeout: Unlimited 00:07:58.994 Command Set Identifier: NVM (00h) 00:07:58.994 Deallocate: Supported 00:07:58.994 Deallocated/Unwritten Error: Supported 00:07:58.994 Deallocated Read Value: All 0x00 00:07:58.994 Deallocate in Write Zeroes: Not Supported 
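An aside on the size lines before the namespace 3 dump continues below: each namespace block reports its size in LBAs with a GiB figure in parentheses, and the two are related through the data size of the current LBA format (#04, 4096 bytes, per the same dump). A quick arithmetic check in bash, with values copied from the 12342 namespaces:

    # Sketch: confirm "Size (in LBAs): 1048576 (4GiB)" against the current
    # LBA format (#04, 4096-byte data size) reported in the dump above.
    lbas=1048576       # Size (in LBAs) from the namespace block
    lba_bytes=4096     # Data Size of LBA Format #04
    echo $(( lbas * lba_bytes ))              # 4294967296 bytes
    echo $(( (lbas * lba_bytes) >> 30 ))GiB   # 4GiB, matching the dump

The same check reproduces the 12341 namespace earlier in the log: 1310720 x 4096 = 5368709120 bytes = 5GiB.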
00:07:58.994 Deallocated Guard Field: 0xFFFF 00:07:58.994 Flush: Supported 00:07:58.994 Reservation: Not Supported 00:07:58.994 Namespace Sharing Capabilities: Private 00:07:58.994 Size (in LBAs): 1048576 (4GiB) 00:07:58.994 Capacity (in LBAs): 1048576 (4GiB) 00:07:58.994 Utilization (in LBAs): 1048576 (4GiB) 00:07:58.994 Thin Provisioning: Not Supported 00:07:58.994 Per-NS Atomic Units: No 00:07:58.994 Maximum Single Source Range Length: 128 00:07:58.994 Maximum Copy Length: 128 00:07:58.994 Maximum Source Range Count: 128 00:07:58.994 NGUID/EUI64 Never Reused: No 00:07:58.994 Namespace Write Protected: No 00:07:58.994 Number of LBA Formats: 8 00:07:58.994 Current LBA Format: LBA Format #04 00:07:58.994 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.994 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.994 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.994 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.994 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:58.994 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.994 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.994 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.994 00:07:58.994 NVM Specific Namespace Data 00:07:58.994 =========================== 00:07:58.994 Logical Block Storage Tag Mask: 0 00:07:58.994 Protection Information Capabilities: 00:07:58.994 16b Guard Protection Information Storage Tag Support: No 00:07:58.994 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:58.994 Storage Tag Check Read Support: No 00:07:58.994 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.994 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:58.995 00:53:21 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:58.995 00:53:21 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:58.995 ===================================================== 00:07:58.995 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:58.995 ===================================================== 00:07:58.995 Controller Capabilities/Features 00:07:58.995 ================================ 00:07:58.995 Vendor ID: 1b36 00:07:58.995 Subsystem Vendor ID: 1af4 00:07:58.995 Serial Number: 12343 00:07:58.995 Model Number: QEMU NVMe Ctrl 00:07:58.995 Firmware Version: 8.0.0 00:07:58.995 Recommended Arb Burst: 6 00:07:58.995 IEEE OUI Identifier: 00 54 52 00:07:58.995 Multi-path I/O 00:07:58.995 May have multiple subsystem ports: No 00:07:58.995 May have multiple controllers: Yes 00:07:58.995 Associated with SR-IOV VF: No 00:07:58.995 Max Data Transfer Size: 524288 00:07:58.995 Max Number of Namespaces: 256 00:07:58.995 Max Number of I/O Queues: 64 00:07:58.995 NVMe 
Specification Version (VS): 1.4 00:07:58.995 NVMe Specification Version (Identify): 1.4 00:07:58.995 Maximum Queue Entries: 2048 00:07:58.995 Contiguous Queues Required: Yes 00:07:58.995 Arbitration Mechanisms Supported 00:07:58.995 Weighted Round Robin: Not Supported 00:07:58.995 Vendor Specific: Not Supported 00:07:58.995 Reset Timeout: 7500 ms 00:07:58.995 Doorbell Stride: 4 bytes 00:07:58.995 NVM Subsystem Reset: Not Supported 00:07:58.995 Command Sets Supported 00:07:58.995 NVM Command Set: Supported 00:07:58.995 Boot Partition: Not Supported 00:07:58.995 Memory Page Size Minimum: 4096 bytes 00:07:58.995 Memory Page Size Maximum: 65536 bytes 00:07:58.995 Persistent Memory Region: Not Supported 00:07:58.995 Optional Asynchronous Events Supported 00:07:58.995 Namespace Attribute Notices: Supported 00:07:58.995 Firmware Activation Notices: Not Supported 00:07:58.995 ANA Change Notices: Not Supported 00:07:58.995 PLE Aggregate Log Change Notices: Not Supported 00:07:58.995 LBA Status Info Alert Notices: Not Supported 00:07:58.995 EGE Aggregate Log Change Notices: Not Supported 00:07:58.995 Normal NVM Subsystem Shutdown event: Not Supported 00:07:58.995 Zone Descriptor Change Notices: Not Supported 00:07:58.995 Discovery Log Change Notices: Not Supported 00:07:58.995 Controller Attributes 00:07:58.995 128-bit Host Identifier: Not Supported 00:07:58.995 Non-Operational Permissive Mode: Not Supported 00:07:58.995 NVM Sets: Not Supported 00:07:58.995 Read Recovery Levels: Not Supported 00:07:58.995 Endurance Groups: Supported 00:07:58.995 Predictable Latency Mode: Not Supported 00:07:58.995 Traffic Based Keep Alive: Not Supported 00:07:58.995 Namespace Granularity: Not Supported 00:07:58.995 SQ Associations: Not Supported 00:07:58.995 UUID List: Not Supported 00:07:58.995 Multi-Domain Subsystem: Not Supported 00:07:58.995 Fixed Capacity Management: Not Supported 00:07:58.995 Variable Capacity Management: Not Supported 00:07:58.995 Delete Endurance Group: Not Supported 00:07:58.995 Delete NVM Set: Not Supported 00:07:58.995 Extended LBA Formats Supported: Supported 00:07:58.995 Flexible Data Placement Supported: Supported 00:07:58.995 00:07:58.995 Controller Memory Buffer Support 00:07:58.995 ================================ 00:07:58.995 Supported: No 00:07:58.995 00:07:58.995 Persistent Memory Region Support 00:07:58.995 ================================ 00:07:58.995 Supported: No 00:07:58.995 00:07:58.995 Admin Command Set Attributes 00:07:58.995 ============================ 00:07:58.995 Security Send/Receive: Not Supported 00:07:58.995 Format NVM: Supported 00:07:58.995 Firmware Activate/Download: Not Supported 00:07:58.995 Namespace Management: Supported 00:07:58.995 Device Self-Test: Not Supported 00:07:58.995 Directives: Supported 00:07:58.995 NVMe-MI: Not Supported 00:07:58.995 Virtualization Management: Not Supported 00:07:58.995 Doorbell Buffer Config: Supported 00:07:58.995 Get LBA Status Capability: Not Supported 00:07:58.995 Command & Feature Lockdown Capability: Not Supported 00:07:58.995 Abort Command Limit: 4 00:07:58.995 Async Event Request Limit: 4 00:07:58.995 Number of Firmware Slots: N/A 00:07:58.995 Firmware Slot 1 Read-Only: N/A 00:07:58.995 Firmware Activation Without Reset: N/A 00:07:58.995 Multiple Update Detection Support: N/A 00:07:58.995 Firmware Update Granularity: No Information Provided 00:07:58.995 Per-Namespace SMART Log: Yes 00:07:58.995 Asymmetric Namespace Access Log Page: Not Supported 00:07:58.995 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:58.995
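One more aside before the 12343 dump continues below: every Health Information block in this log pairs a Kelvin temperature with a Celsius value, and the pairs recorded here (323 K / 50 C and 343 K / 70 C) indicate the integer conversion C = K - 273. A one-line check:

    # Sketch: integer Kelvin-to-Celsius conversion implied by the dumps above.
    k_to_c() { echo $(( $1 - 273 )); }
    k_to_c 323   # 50, matching "Current Temperature: 323 Kelvin (50 Celsius)"
    k_to_c 343   # 70, matching "Temperature Threshold: 343 Kelvin (70 Celsius)"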
Command Effects Log Page: Supported 00:07:58.995 Get Log Page Extended Data: Supported 00:07:58.995 Telemetry Log Pages: Not Supported 00:07:58.995 Persistent Event Log Pages: Not Supported 00:07:58.995 Supported Log Pages Log Page: May Support 00:07:58.995 Commands Supported & Effects Log Page: Not Supported 00:07:58.995 Feature Identifiers & Effects Log Page:May Support 00:07:58.995 NVMe-MI Commands & Effects Log Page: May Support 00:07:58.995 Data Area 4 for Telemetry Log: Not Supported 00:07:58.995 Error Log Page Entries Supported: 1 00:07:58.995 Keep Alive: Not Supported 00:07:58.995 00:07:58.995 NVM Command Set Attributes 00:07:58.995 ========================== 00:07:58.995 Submission Queue Entry Size 00:07:58.995 Max: 64 00:07:58.995 Min: 64 00:07:58.995 Completion Queue Entry Size 00:07:58.995 Max: 16 00:07:58.995 Min: 16 00:07:58.995 Number of Namespaces: 256 00:07:58.995 Compare Command: Supported 00:07:58.995 Write Uncorrectable Command: Not Supported 00:07:58.995 Dataset Management Command: Supported 00:07:58.995 Write Zeroes Command: Supported 00:07:58.995 Set Features Save Field: Supported 00:07:58.995 Reservations: Not Supported 00:07:58.995 Timestamp: Supported 00:07:58.995 Copy: Supported 00:07:58.995 Volatile Write Cache: Present 00:07:58.995 Atomic Write Unit (Normal): 1 00:07:58.995 Atomic Write Unit (PFail): 1 00:07:58.995 Atomic Compare & Write Unit: 1 00:07:58.995 Fused Compare & Write: Not Supported 00:07:58.995 Scatter-Gather List 00:07:58.995 SGL Command Set: Supported 00:07:58.995 SGL Keyed: Not Supported 00:07:58.995 SGL Bit Bucket Descriptor: Not Supported 00:07:58.995 SGL Metadata Pointer: Not Supported 00:07:58.995 Oversized SGL: Not Supported 00:07:58.995 SGL Metadata Address: Not Supported 00:07:58.995 SGL Offset: Not Supported 00:07:58.995 Transport SGL Data Block: Not Supported 00:07:58.995 Replay Protected Memory Block: Not Supported 00:07:58.995 00:07:58.995 Firmware Slot Information 00:07:58.995 ========================= 00:07:58.995 Active slot: 1 00:07:58.995 Slot 1 Firmware Revision: 1.0 00:07:58.995 00:07:58.995 00:07:58.995 Commands Supported and Effects 00:07:58.995 ============================== 00:07:58.995 Admin Commands 00:07:58.995 -------------- 00:07:58.995 Delete I/O Submission Queue (00h): Supported 00:07:58.995 Create I/O Submission Queue (01h): Supported 00:07:58.995 Get Log Page (02h): Supported 00:07:58.995 Delete I/O Completion Queue (04h): Supported 00:07:58.995 Create I/O Completion Queue (05h): Supported 00:07:58.995 Identify (06h): Supported 00:07:58.995 Abort (08h): Supported 00:07:58.995 Set Features (09h): Supported 00:07:58.995 Get Features (0Ah): Supported 00:07:58.995 Asynchronous Event Request (0Ch): Supported 00:07:58.995 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:58.995 Directive Send (19h): Supported 00:07:58.995 Directive Receive (1Ah): Supported 00:07:58.995 Virtualization Management (1Ch): Supported 00:07:58.995 Doorbell Buffer Config (7Ch): Supported 00:07:58.995 Format NVM (80h): Supported LBA-Change 00:07:58.995 I/O Commands 00:07:58.995 ------------ 00:07:58.995 Flush (00h): Supported LBA-Change 00:07:58.995 Write (01h): Supported LBA-Change 00:07:58.995 Read (02h): Supported 00:07:58.995 Compare (05h): Supported 00:07:58.995 Write Zeroes (08h): Supported LBA-Change 00:07:58.995 Dataset Management (09h): Supported LBA-Change 00:07:58.995 Unknown (0Ch): Supported 00:07:58.995 Unknown (12h): Supported 00:07:58.995 Copy (19h): Supported LBA-Change 00:07:58.995 Unknown (1Dh): Supported 
LBA-Change 00:07:58.995 00:07:58.995 Error Log 00:07:58.995 ========= 00:07:58.995 00:07:58.995 Arbitration 00:07:58.995 =========== 00:07:58.995 Arbitration Burst: no limit 00:07:58.995 00:07:58.995 Power Management 00:07:58.995 ================ 00:07:58.995 Number of Power States: 1 00:07:58.995 Current Power State: Power State #0 00:07:58.995 Power State #0: 00:07:58.995 Max Power: 25.00 W 00:07:58.995 Non-Operational State: Operational 00:07:58.995 Entry Latency: 16 microseconds 00:07:58.995 Exit Latency: 4 microseconds 00:07:58.995 Relative Read Throughput: 0 00:07:58.995 Relative Read Latency: 0 00:07:58.996 Relative Write Throughput: 0 00:07:58.996 Relative Write Latency: 0 00:07:58.996 Idle Power: Not Reported 00:07:58.996 Active Power: Not Reported 00:07:58.996 Non-Operational Permissive Mode: Not Supported 00:07:58.996 00:07:58.996 Health Information 00:07:58.996 ================== 00:07:58.996 Critical Warnings: 00:07:58.996 Available Spare Space: OK 00:07:58.996 Temperature: OK 00:07:58.996 Device Reliability: OK 00:07:58.996 Read Only: No 00:07:58.996 Volatile Memory Backup: OK 00:07:58.996 Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.996 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:58.996 Available Spare: 0% 00:07:58.996 Available Spare Threshold: 0% 00:07:58.996 Life Percentage Used: 0% 00:07:58.996 Data Units Read: 757 00:07:58.996 Data Units Written: 686 00:07:58.996 Host Read Commands: 37273 00:07:58.996 Host Write Commands: 36696 00:07:58.996 Controller Busy Time: 0 minutes 00:07:58.996 Power Cycles: 0 00:07:58.996 Power On Hours: 0 hours 00:07:58.996 Unsafe Shutdowns: 0 00:07:58.996 Unrecoverable Media Errors: 0 00:07:58.996 Lifetime Error Log Entries: 0 00:07:58.996 Warning Temperature Time: 0 minutes 00:07:58.996 Critical Temperature Time: 0 minutes 00:07:58.996 00:07:58.996 Number of Queues 00:07:58.996 ================ 00:07:58.996 Number of I/O Submission Queues: 64 00:07:58.996 Number of I/O Completion Queues: 64 00:07:58.996 00:07:58.996 ZNS Specific Controller Data 00:07:58.996 ============================ 00:07:58.996 Zone Append Size Limit: 0 00:07:58.996 00:07:58.996 00:07:58.996 Active Namespaces 00:07:58.996 ================= 00:07:58.996 Namespace ID:1 00:07:58.996 Error Recovery Timeout: Unlimited 00:07:58.996 Command Set Identifier: NVM (00h) 00:07:58.996 Deallocate: Supported 00:07:58.996 Deallocated/Unwritten Error: Supported 00:07:58.996 Deallocated Read Value: All 0x00 00:07:58.996 Deallocate in Write Zeroes: Not Supported 00:07:58.996 Deallocated Guard Field: 0xFFFF 00:07:58.996 Flush: Supported 00:07:58.996 Reservation: Not Supported 00:07:58.996 Namespace Sharing Capabilities: Multiple Controllers 00:07:58.996 Size (in LBAs): 262144 (1GiB) 00:07:58.996 Capacity (in LBAs): 262144 (1GiB) 00:07:58.996 Utilization (in LBAs): 262144 (1GiB) 00:07:58.996 Thin Provisioning: Not Supported 00:07:58.996 Per-NS Atomic Units: No 00:07:58.996 Maximum Single Source Range Length: 128 00:07:58.996 Maximum Copy Length: 128 00:07:58.996 Maximum Source Range Count: 128 00:07:58.996 NGUID/EUI64 Never Reused: No 00:07:58.996 Namespace Write Protected: No 00:07:58.996 Endurance group ID: 1 00:07:58.996 Number of LBA Formats: 8 00:07:58.996 Current LBA Format: LBA Format #04 00:07:58.996 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:58.996 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:58.996 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:58.996 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:58.996 LBA Format #04: Data 
Size: 4096 Metadata Size: 0 00:07:58.996 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:58.996 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:58.996 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:58.996 00:07:58.996 Get Feature FDP: 00:07:58.996 ================ 00:07:58.996 Enabled: Yes 00:07:58.996 FDP configuration index: 0 00:07:58.996 00:07:58.996 FDP configurations log page 00:07:58.996 =========================== 00:07:58.996 Number of FDP configurations: 1 00:07:58.996 Version: 0 00:07:58.996 Size: 112 00:07:58.996 FDP Configuration Descriptor: 0 00:07:58.996 Descriptor Size: 96 00:07:58.996 Reclaim Group Identifier format: 2 00:07:58.996 FDP Volatile Write Cache: Not Present 00:07:58.996 FDP Configuration: Valid 00:07:58.996 Vendor Specific Size: 0 00:07:58.996 Number of Reclaim Groups: 2 00:07:58.996 Number of Reclaim Unit Handles: 8 00:07:58.996 Max Placement Identifiers: 128 00:07:58.996 Number of Namespaces Supported: 256 00:07:58.996 Reclaim Unit Nominal Size: 6000000 bytes 00:07:58.996 Estimated Reclaim Unit Time Limit: Not Reported 00:07:58.996 RUH Desc #000: RUH Type: Initially Isolated 00:07:58.996 RUH Desc #001: RUH Type: Initially Isolated 00:07:58.996 RUH Desc #002: RUH Type: Initially Isolated 00:07:58.996 RUH Desc #003: RUH Type: Initially Isolated 00:07:58.996 RUH Desc #004: RUH Type: Initially Isolated 00:07:58.996 RUH Desc #005: RUH Type: Initially Isolated 00:07:58.996 RUH Desc #006: RUH Type: Initially Isolated 00:07:58.996 RUH Desc #007: RUH Type: Initially Isolated 00:07:58.996 00:07:58.996 FDP reclaim unit handle usage log page 00:07:59.255 ====================================== 00:07:59.255 Number of Reclaim Unit Handles: 8 00:07:59.255 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:59.255 RUH Usage Desc #001: RUH Attributes: Unused 00:07:59.255 RUH Usage Desc #002: RUH Attributes: Unused 00:07:59.255 RUH Usage Desc #003: RUH Attributes: Unused 00:07:59.255 RUH Usage Desc #004: RUH Attributes: Unused 00:07:59.255 RUH Usage Desc #005: RUH Attributes: Unused 00:07:59.255 RUH Usage Desc #006: RUH Attributes: Unused 00:07:59.255 RUH Usage Desc #007: RUH Attributes: Unused 00:07:59.255 00:07:59.255 FDP statistics log page 00:07:59.255 ======================= 00:07:59.255 Host bytes with metadata written: 431955968 00:07:59.255 Media bytes with metadata written: 431988736 00:07:59.255 Media bytes erased: 0 00:07:59.255 00:07:59.255 FDP events log page 00:07:59.255 =================== 00:07:59.255 Number of FDP events: 0 00:07:59.255 00:07:59.255 NVM Specific Namespace Data 00:07:59.255 =========================== 00:07:59.255 Logical Block Storage Tag Mask: 0 00:07:59.255 Protection Information Capabilities: 00:07:59.255 16b Guard Protection Information Storage Tag Support: No 00:07:59.255 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:59.255 Storage Tag Check Read Support: No 00:07:59.255 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.255 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.255 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.255 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.255 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.255 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard
PI 00:07:59.255 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.255 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:59.255 00:07:59.255 real 0m1.156s 00:07:59.255 user 0m0.427s 00:07:59.255 sys 0m0.515s 00:07:59.255 00:53:21 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.255 ************************************ 00:07:59.255 END TEST nvme_identify 00:07:59.255 ************************************ 00:07:59.255 00:53:21 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:59.255 00:53:21 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:59.255 00:53:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.255 00:53:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.255 00:53:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.255 ************************************ 00:07:59.255 START TEST nvme_perf 00:07:59.255 ************************************ 00:07:59.255 00:53:21 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:59.255 00:53:21 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:00.631 Initializing NVMe Controllers 00:08:00.631 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:00.631 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:00.631 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:00.631 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:00.631 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:00.631 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:00.631 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:00.631 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:00.631 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:00.631 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:00.631 Initialization complete. Launching workers. 
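Before the result tables, a note on the invocation recorded above: spdk_nvme_perf was started with -q 128 -w read -o 12288 -t 1 -LL -i 0 -N. Reading the common flags from the tool's usage text (the -LL, -i 0 and -N settings are copied verbatim from the log; -L enables latency tracking and appears doubled here for the detailed histograms that follow), this is a 1-second read-only run at queue depth 128 with a 12288-byte (12 KiB) I/O size:

    # Sketch reproducing the nvme_perf step recorded above; the binary path
    # is taken from this log, the flag values verbatim from the invocation.
    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    "$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

The summary table below is consistent with those flags: 17918.98 IOPS x 12288 bytes per I/O is about 210 MiB/s, matching the 209.99 MiB/s column.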
00:08:00.631 ======================================================== 00:08:00.631 Latency(us) 00:08:00.631 Device Information : IOPS MiB/s Average min max 00:08:00.631 PCIE (0000:00:10.0) NSID 1 from core 0: 17918.98 209.99 7145.39 4436.44 25748.35 00:08:00.631 PCIE (0000:00:11.0) NSID 1 from core 0: 17918.98 209.99 7140.81 4333.40 25854.89 00:08:00.631 PCIE (0000:00:13.0) NSID 1 from core 0: 17918.98 209.99 7134.80 3878.72 25978.97 00:08:00.631 PCIE (0000:00:12.0) NSID 1 from core 0: 17918.98 209.99 7128.82 3663.14 26084.18 00:08:00.631 PCIE (0000:00:12.0) NSID 2 from core 0: 17918.98 209.99 7122.70 3522.02 25844.31 00:08:00.631 PCIE (0000:00:12.0) NSID 3 from core 0: 17918.98 209.99 7116.46 3275.19 25162.23 00:08:00.631 ======================================================== 00:08:00.631 Total : 107513.87 1259.93 7131.50 3275.19 26084.18 00:08:00.631 00:08:00.631 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:00.631 ================================================================================= 00:08:00.631 1.00000% : 5772.209us 00:08:00.631 10.00000% : 5973.858us 00:08:00.631 25.00000% : 6200.714us 00:08:00.631 50.00000% : 6553.600us 00:08:00.631 75.00000% : 6906.486us 00:08:00.631 90.00000% : 9880.812us 00:08:00.631 95.00000% : 11241.945us 00:08:00.631 98.00000% : 12703.902us 00:08:00.631 99.00000% : 14115.446us 00:08:00.631 99.50000% : 16434.412us 00:08:00.631 99.90000% : 25206.154us 00:08:00.631 99.99000% : 25710.277us 00:08:00.631 99.99900% : 25811.102us 00:08:00.631 99.99990% : 25811.102us 00:08:00.631 99.99999% : 25811.102us 00:08:00.631 00:08:00.631 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:00.631 ================================================================================= 00:08:00.631 1.00000% : 5847.828us 00:08:00.631 10.00000% : 6024.271us 00:08:00.631 25.00000% : 6225.920us 00:08:00.631 50.00000% : 6503.188us 00:08:00.631 75.00000% : 6856.074us 00:08:00.631 90.00000% : 9880.812us 00:08:00.631 95.00000% : 11141.120us 00:08:00.631 98.00000% : 12552.665us 00:08:00.631 99.00000% : 13812.972us 00:08:00.631 99.50000% : 17039.360us 00:08:00.631 99.90000% : 25407.803us 00:08:00.631 99.99000% : 26012.751us 00:08:00.631 99.99900% : 26012.751us 00:08:00.631 99.99990% : 26012.751us 00:08:00.631 99.99999% : 26012.751us 00:08:00.631 00:08:00.631 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:00.631 ================================================================================= 00:08:00.631 1.00000% : 5847.828us 00:08:00.631 10.00000% : 6024.271us 00:08:00.631 25.00000% : 6225.920us 00:08:00.631 50.00000% : 6503.188us 00:08:00.631 75.00000% : 6856.074us 00:08:00.631 90.00000% : 10032.049us 00:08:00.631 95.00000% : 11040.295us 00:08:00.631 98.00000% : 12552.665us 00:08:00.631 99.00000% : 14216.271us 00:08:00.631 99.50000% : 17140.185us 00:08:00.631 99.90000% : 25609.452us 00:08:00.631 99.99000% : 26012.751us 00:08:00.631 99.99900% : 26012.751us 00:08:00.631 99.99990% : 26012.751us 00:08:00.631 99.99999% : 26012.751us 00:08:00.631 00:08:00.631 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:00.631 ================================================================================= 00:08:00.631 1.00000% : 5822.622us 00:08:00.631 10.00000% : 6024.271us 00:08:00.631 25.00000% : 6225.920us 00:08:00.631 50.00000% : 6503.188us 00:08:00.631 75.00000% : 6856.074us 00:08:00.631 90.00000% : 9931.225us 00:08:00.631 95.00000% : 11141.120us 00:08:00.631 98.00000% : 12351.015us 00:08:00.631 99.00000% 
: 14518.745us 00:08:00.631 99.50000% : 17241.009us 00:08:00.631 99.90000% : 25710.277us 00:08:00.631 99.99000% : 26214.400us 00:08:00.631 99.99900% : 26214.400us 00:08:00.631 99.99990% : 26214.400us 00:08:00.631 99.99999% : 26214.400us 00:08:00.631 00:08:00.631 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:00.631 ================================================================================= 00:08:00.631 1.00000% : 5847.828us 00:08:00.631 10.00000% : 6024.271us 00:08:00.631 25.00000% : 6225.920us 00:08:00.631 50.00000% : 6503.188us 00:08:00.631 75.00000% : 6856.074us 00:08:00.631 90.00000% : 9931.225us 00:08:00.631 95.00000% : 11141.120us 00:08:00.631 98.00000% : 12149.366us 00:08:00.631 99.00000% : 14417.920us 00:08:00.631 99.50000% : 17442.658us 00:08:00.631 99.90000% : 25609.452us 00:08:00.631 99.99000% : 26012.751us 00:08:00.631 99.99900% : 26012.751us 00:08:00.631 99.99990% : 26012.751us 00:08:00.631 99.99999% : 26012.751us 00:08:00.631 00:08:00.631 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:00.631 ================================================================================= 00:08:00.631 1.00000% : 5822.622us 00:08:00.631 10.00000% : 6024.271us 00:08:00.631 25.00000% : 6225.920us 00:08:00.631 50.00000% : 6503.188us 00:08:00.631 75.00000% : 6856.074us 00:08:00.631 90.00000% : 9931.225us 00:08:00.631 95.00000% : 11191.532us 00:08:00.631 98.00000% : 12149.366us 00:08:00.631 99.00000% : 13611.323us 00:08:00.631 99.50000% : 17442.658us 00:08:00.631 99.90000% : 24903.680us 00:08:00.631 99.99000% : 25206.154us 00:08:00.631 99.99900% : 25206.154us 00:08:00.632 99.99990% : 25206.154us 00:08:00.632 99.99999% : 25206.154us 00:08:00.632 00:08:00.632 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:00.632 ============================================================================== 00:08:00.632 Range in us Cumulative IO count 00:08:00.632 4436.283 - 4461.489: 0.0223% ( 4) 00:08:00.632 4461.489 - 4486.695: 0.0279% ( 1) 00:08:00.632 4486.695 - 4511.902: 0.0446% ( 3) 00:08:00.632 4511.902 - 4537.108: 0.0558% ( 2) 00:08:00.632 4537.108 - 4562.314: 0.0670% ( 2) 00:08:00.632 4562.314 - 4587.520: 0.0725% ( 1) 00:08:00.632 4587.520 - 4612.726: 0.0837% ( 2) 00:08:00.632 4612.726 - 4637.932: 0.1004% ( 3) 00:08:00.632 4637.932 - 4663.138: 0.1116% ( 2) 00:08:00.632 4663.138 - 4688.345: 0.1172% ( 1) 00:08:00.632 4688.345 - 4713.551: 0.1283% ( 2) 00:08:00.632 4713.551 - 4738.757: 0.1339% ( 1) 00:08:00.632 4738.757 - 4763.963: 0.1451% ( 2) 00:08:00.632 4763.963 - 4789.169: 0.1562% ( 2) 00:08:00.632 4789.169 - 4814.375: 0.1674% ( 2) 00:08:00.632 4814.375 - 4839.582: 0.1730% ( 1) 00:08:00.632 4839.582 - 4864.788: 0.1897% ( 3) 00:08:00.632 4889.994 - 4915.200: 0.2065% ( 3) 00:08:00.632 4915.200 - 4940.406: 0.2176% ( 2) 00:08:00.632 4940.406 - 4965.612: 0.2288% ( 2) 00:08:00.632 4965.612 - 4990.818: 0.2400% ( 2) 00:08:00.632 4990.818 - 5016.025: 0.2455% ( 1) 00:08:00.632 5016.025 - 5041.231: 0.2623% ( 3) 00:08:00.632 5041.231 - 5066.437: 0.2679% ( 1) 00:08:00.632 5066.437 - 5091.643: 0.2846% ( 3) 00:08:00.632 5091.643 - 5116.849: 0.2902% ( 1) 00:08:00.632 5116.849 - 5142.055: 0.3069% ( 3) 00:08:00.632 5142.055 - 5167.262: 0.3181% ( 2) 00:08:00.632 5167.262 - 5192.468: 0.3237% ( 1) 00:08:00.632 5192.468 - 5217.674: 0.3404% ( 3) 00:08:00.632 5217.674 - 5242.880: 0.3460% ( 1) 00:08:00.632 5242.880 - 5268.086: 0.3571% ( 2) 00:08:00.632 5671.385 - 5696.591: 0.3627% ( 1) 00:08:00.632 5696.591 - 5721.797: 0.4408% ( 14) 00:08:00.632 5721.797 - 
5747.003: 0.6641% ( 40) 00:08:00.632 5747.003 - 5772.209: 1.0938% ( 77) 00:08:00.632 5772.209 - 5797.415: 1.6741% ( 104) 00:08:00.632 5797.415 - 5822.622: 2.4275% ( 135) 00:08:00.632 5822.622 - 5847.828: 3.3873% ( 172) 00:08:00.632 5847.828 - 5873.034: 4.6931% ( 234) 00:08:00.632 5873.034 - 5898.240: 6.0658% ( 246) 00:08:00.632 5898.240 - 5923.446: 7.5558% ( 267) 00:08:00.632 5923.446 - 5948.652: 9.0234% ( 263) 00:08:00.632 5948.652 - 5973.858: 10.7254% ( 305) 00:08:00.632 5973.858 - 5999.065: 12.3996% ( 300) 00:08:00.632 5999.065 - 6024.271: 14.1239% ( 309) 00:08:00.632 6024.271 - 6049.477: 15.8203% ( 304) 00:08:00.632 6049.477 - 6074.683: 17.5446% ( 309) 00:08:00.632 6074.683 - 6099.889: 19.3862% ( 330) 00:08:00.632 6099.889 - 6125.095: 21.1942% ( 324) 00:08:00.632 6125.095 - 6150.302: 23.1585% ( 352) 00:08:00.632 6150.302 - 6175.508: 24.9888% ( 328) 00:08:00.632 6175.508 - 6200.714: 26.7578% ( 317) 00:08:00.632 6200.714 - 6225.920: 28.7444% ( 356) 00:08:00.632 6225.920 - 6251.126: 30.4520% ( 306) 00:08:00.632 6251.126 - 6276.332: 32.3438% ( 339) 00:08:00.632 6276.332 - 6301.538: 34.1629% ( 326) 00:08:00.632 6301.538 - 6326.745: 36.0212% ( 333) 00:08:00.632 6326.745 - 6351.951: 37.9743% ( 350) 00:08:00.632 6351.951 - 6377.157: 39.9107% ( 347) 00:08:00.632 6377.157 - 6402.363: 41.7355% ( 327) 00:08:00.632 6402.363 - 6427.569: 43.6049% ( 335) 00:08:00.632 6427.569 - 6452.775: 45.5469% ( 348) 00:08:00.632 6452.775 - 6503.188: 49.4308% ( 696) 00:08:00.632 6503.188 - 6553.600: 53.3984% ( 711) 00:08:00.632 6553.600 - 6604.012: 57.2656% ( 693) 00:08:00.632 6604.012 - 6654.425: 61.2165% ( 708) 00:08:00.632 6654.425 - 6704.837: 64.9498% ( 669) 00:08:00.632 6704.837 - 6755.249: 68.5379% ( 643) 00:08:00.632 6755.249 - 6805.662: 71.6797% ( 563) 00:08:00.632 6805.662 - 6856.074: 74.1518% ( 443) 00:08:00.632 6856.074 - 6906.486: 75.8371% ( 302) 00:08:00.632 6906.486 - 6956.898: 77.0647% ( 220) 00:08:00.632 6956.898 - 7007.311: 78.0525% ( 177) 00:08:00.632 7007.311 - 7057.723: 78.7109% ( 118) 00:08:00.632 7057.723 - 7108.135: 79.2076% ( 89) 00:08:00.632 7108.135 - 7158.548: 79.6205% ( 74) 00:08:00.632 7158.548 - 7208.960: 79.9944% ( 67) 00:08:00.632 7208.960 - 7259.372: 80.3571% ( 65) 00:08:00.632 7259.372 - 7309.785: 80.7533% ( 71) 00:08:00.632 7309.785 - 7360.197: 81.0938% ( 61) 00:08:00.632 7360.197 - 7410.609: 81.4007% ( 55) 00:08:00.632 7410.609 - 7461.022: 81.7076% ( 55) 00:08:00.632 7461.022 - 7511.434: 81.9922% ( 51) 00:08:00.632 7511.434 - 7561.846: 82.2600% ( 48) 00:08:00.632 7561.846 - 7612.258: 82.4833% ( 40) 00:08:00.632 7612.258 - 7662.671: 82.7455% ( 47) 00:08:00.632 7662.671 - 7713.083: 82.9911% ( 44) 00:08:00.632 7713.083 - 7763.495: 83.2422% ( 45) 00:08:00.632 7763.495 - 7813.908: 83.4821% ( 43) 00:08:00.632 7813.908 - 7864.320: 83.7054% ( 40) 00:08:00.632 7864.320 - 7914.732: 83.8672% ( 29) 00:08:00.632 7914.732 - 7965.145: 84.0346% ( 30) 00:08:00.632 7965.145 - 8015.557: 84.2020% ( 30) 00:08:00.632 8015.557 - 8065.969: 84.3471% ( 26) 00:08:00.632 8065.969 - 8116.382: 84.4643% ( 21) 00:08:00.632 8116.382 - 8166.794: 84.5703% ( 19) 00:08:00.632 8166.794 - 8217.206: 84.6875% ( 21) 00:08:00.632 8217.206 - 8267.618: 84.8047% ( 21) 00:08:00.632 8267.618 - 8318.031: 84.8996% ( 17) 00:08:00.632 8318.031 - 8368.443: 85.0112% ( 20) 00:08:00.632 8368.443 - 8418.855: 85.0837% ( 13) 00:08:00.632 8418.855 - 8469.268: 85.1897% ( 19) 00:08:00.632 8469.268 - 8519.680: 85.2790% ( 16) 00:08:00.632 8519.680 - 8570.092: 85.3906% ( 20) 00:08:00.632 8570.092 - 8620.505: 85.5022% ( 20) 00:08:00.632 
8620.505 - 8670.917: 85.6194% ( 21) 00:08:00.632 8670.917 - 8721.329: 85.7366% ( 21) 00:08:00.632 8721.329 - 8771.742: 85.8817% ( 26) 00:08:00.632 8771.742 - 8822.154: 86.0491% ( 30) 00:08:00.632 8822.154 - 8872.566: 86.1830% ( 24) 00:08:00.632 8872.566 - 8922.978: 86.3728% ( 34) 00:08:00.632 8922.978 - 8973.391: 86.5402% ( 30) 00:08:00.632 8973.391 - 9023.803: 86.7076% ( 30) 00:08:00.632 9023.803 - 9074.215: 86.9029% ( 35) 00:08:00.632 9074.215 - 9124.628: 87.0759% ( 31) 00:08:00.632 9124.628 - 9175.040: 87.3047% ( 41) 00:08:00.632 9175.040 - 9225.452: 87.4888% ( 33) 00:08:00.632 9225.452 - 9275.865: 87.7288% ( 43) 00:08:00.632 9275.865 - 9326.277: 87.9576% ( 41) 00:08:00.632 9326.277 - 9376.689: 88.1529% ( 35) 00:08:00.632 9376.689 - 9427.102: 88.3817% ( 41) 00:08:00.632 9427.102 - 9477.514: 88.6049% ( 40) 00:08:00.632 9477.514 - 9527.926: 88.8058% ( 36) 00:08:00.632 9527.926 - 9578.338: 89.0067% ( 36) 00:08:00.632 9578.338 - 9628.751: 89.1406% ( 24) 00:08:00.632 9628.751 - 9679.163: 89.2969% ( 28) 00:08:00.632 9679.163 - 9729.575: 89.4810% ( 33) 00:08:00.632 9729.575 - 9779.988: 89.6763% ( 35) 00:08:00.632 9779.988 - 9830.400: 89.8717% ( 35) 00:08:00.632 9830.400 - 9880.812: 90.0446% ( 31) 00:08:00.632 9880.812 - 9931.225: 90.2232% ( 32) 00:08:00.632 9931.225 - 9981.637: 90.3906% ( 30) 00:08:00.632 9981.637 - 10032.049: 90.5525% ( 29) 00:08:00.632 10032.049 - 10082.462: 90.7199% ( 30) 00:08:00.632 10082.462 - 10132.874: 90.8482% ( 23) 00:08:00.632 10132.874 - 10183.286: 90.9877% ( 25) 00:08:00.632 10183.286 - 10233.698: 91.1607% ( 31) 00:08:00.632 10233.698 - 10284.111: 91.3393% ( 32) 00:08:00.632 10284.111 - 10334.523: 91.4900% ( 27) 00:08:00.632 10334.523 - 10384.935: 91.6239% ( 24) 00:08:00.632 10384.935 - 10435.348: 91.8136% ( 34) 00:08:00.632 10435.348 - 10485.760: 91.9475% ( 24) 00:08:00.632 10485.760 - 10536.172: 92.1373% ( 34) 00:08:00.632 10536.172 - 10586.585: 92.3661% ( 41) 00:08:00.632 10586.585 - 10636.997: 92.5391% ( 31) 00:08:00.632 10636.997 - 10687.409: 92.7511% ( 38) 00:08:00.632 10687.409 - 10737.822: 92.9464% ( 35) 00:08:00.632 10737.822 - 10788.234: 93.1529% ( 37) 00:08:00.632 10788.234 - 10838.646: 93.3650% ( 38) 00:08:00.632 10838.646 - 10889.058: 93.6328% ( 48) 00:08:00.632 10889.058 - 10939.471: 93.7891% ( 28) 00:08:00.632 10939.471 - 10989.883: 93.9676% ( 32) 00:08:00.632 10989.883 - 11040.295: 94.2076% ( 43) 00:08:00.632 11040.295 - 11090.708: 94.3917% ( 33) 00:08:00.632 11090.708 - 11141.120: 94.5871% ( 35) 00:08:00.632 11141.120 - 11191.532: 94.8382% ( 45) 00:08:00.632 11191.532 - 11241.945: 95.0167% ( 32) 00:08:00.632 11241.945 - 11292.357: 95.1897% ( 31) 00:08:00.632 11292.357 - 11342.769: 95.4464% ( 46) 00:08:00.632 11342.769 - 11393.182: 95.6250% ( 32) 00:08:00.632 11393.182 - 11443.594: 95.8259% ( 36) 00:08:00.632 11443.594 - 11494.006: 96.0379% ( 38) 00:08:00.632 11494.006 - 11544.418: 96.1719% ( 24) 00:08:00.632 11544.418 - 11594.831: 96.3393% ( 30) 00:08:00.632 11594.831 - 11645.243: 96.4788% ( 25) 00:08:00.632 11645.243 - 11695.655: 96.6239% ( 26) 00:08:00.632 11695.655 - 11746.068: 96.7578% ( 24) 00:08:00.632 11746.068 - 11796.480: 96.8471% ( 16) 00:08:00.632 11796.480 - 11846.892: 96.9531% ( 19) 00:08:00.632 11846.892 - 11897.305: 97.0312% ( 14) 00:08:00.632 11897.305 - 11947.717: 97.1038% ( 13) 00:08:00.632 11947.717 - 11998.129: 97.1819% ( 14) 00:08:00.632 11998.129 - 12048.542: 97.2656% ( 15) 00:08:00.632 12048.542 - 12098.954: 97.3214% ( 10) 00:08:00.632 12098.954 - 12149.366: 97.3772% ( 10) 00:08:00.632 12149.366 - 12199.778: 97.4275% ( 9) 
00:08:00.632 12199.778 - 12250.191: 97.5112% ( 15) 00:08:00.632 12250.191 - 12300.603: 97.5949% ( 15) 00:08:00.632 12300.603 - 12351.015: 97.6562% ( 11) 00:08:00.632 12351.015 - 12401.428: 97.7232% ( 12) 00:08:00.632 12401.428 - 12451.840: 97.7734% ( 9) 00:08:00.632 12451.840 - 12502.252: 97.8013% ( 5) 00:08:00.632 12502.252 - 12552.665: 97.8795% ( 14) 00:08:00.633 12552.665 - 12603.077: 97.9129% ( 6) 00:08:00.633 12603.077 - 12653.489: 97.9743% ( 11) 00:08:00.633 12653.489 - 12703.902: 98.0190% ( 8) 00:08:00.633 12703.902 - 12754.314: 98.0804% ( 11) 00:08:00.633 12754.314 - 12804.726: 98.1250% ( 8) 00:08:00.633 12804.726 - 12855.138: 98.1696% ( 8) 00:08:00.633 12855.138 - 12905.551: 98.2366% ( 12) 00:08:00.633 12905.551 - 13006.375: 98.3203% ( 15) 00:08:00.633 13006.375 - 13107.200: 98.3929% ( 13) 00:08:00.633 13107.200 - 13208.025: 98.4487% ( 10) 00:08:00.633 13208.025 - 13308.849: 98.5324% ( 15) 00:08:00.633 13308.849 - 13409.674: 98.5882% ( 10) 00:08:00.633 13409.674 - 13510.498: 98.6328% ( 8) 00:08:00.633 13510.498 - 13611.323: 98.7109% ( 14) 00:08:00.633 13611.323 - 13712.148: 98.7779% ( 12) 00:08:00.633 13712.148 - 13812.972: 98.8504% ( 13) 00:08:00.633 13812.972 - 13913.797: 98.9286% ( 14) 00:08:00.633 13913.797 - 14014.622: 98.9900% ( 11) 00:08:00.633 14014.622 - 14115.446: 99.0625% ( 13) 00:08:00.633 14115.446 - 14216.271: 99.1350% ( 13) 00:08:00.633 14216.271 - 14317.095: 99.2020% ( 12) 00:08:00.633 14317.095 - 14417.920: 99.2411% ( 7) 00:08:00.633 14417.920 - 14518.745: 99.2746% ( 6) 00:08:00.633 14518.745 - 14619.569: 99.2857% ( 2) 00:08:00.633 15728.640 - 15829.465: 99.3192% ( 6) 00:08:00.633 15829.465 - 15930.289: 99.3471% ( 5) 00:08:00.633 15930.289 - 16031.114: 99.3806% ( 6) 00:08:00.633 16031.114 - 16131.938: 99.4141% ( 6) 00:08:00.633 16131.938 - 16232.763: 99.4475% ( 6) 00:08:00.633 16232.763 - 16333.588: 99.4810% ( 6) 00:08:00.633 16333.588 - 16434.412: 99.5145% ( 6) 00:08:00.633 16434.412 - 16535.237: 99.5424% ( 5) 00:08:00.633 16535.237 - 16636.062: 99.5815% ( 7) 00:08:00.633 16636.062 - 16736.886: 99.6094% ( 5) 00:08:00.633 16736.886 - 16837.711: 99.6373% ( 5) 00:08:00.633 16837.711 - 16938.535: 99.6429% ( 1) 00:08:00.633 23693.785 - 23794.609: 99.6540% ( 2) 00:08:00.633 23794.609 - 23895.434: 99.6652% ( 2) 00:08:00.633 23895.434 - 23996.258: 99.6819% ( 3) 00:08:00.633 23996.258 - 24097.083: 99.7042% ( 4) 00:08:00.633 24097.083 - 24197.908: 99.7210% ( 3) 00:08:00.633 24197.908 - 24298.732: 99.7433% ( 4) 00:08:00.633 24298.732 - 24399.557: 99.7545% ( 2) 00:08:00.633 24399.557 - 24500.382: 99.7712% ( 3) 00:08:00.633 24500.382 - 24601.206: 99.7935% ( 4) 00:08:00.633 24601.206 - 24702.031: 99.8103% ( 3) 00:08:00.633 24702.031 - 24802.855: 99.8270% ( 3) 00:08:00.633 24802.855 - 24903.680: 99.8493% ( 4) 00:08:00.633 24903.680 - 25004.505: 99.8661% ( 3) 00:08:00.633 25004.505 - 25105.329: 99.8828% ( 3) 00:08:00.633 25105.329 - 25206.154: 99.9051% ( 4) 00:08:00.633 25206.154 - 25306.978: 99.9219% ( 3) 00:08:00.633 25306.978 - 25407.803: 99.9386% ( 3) 00:08:00.633 25407.803 - 25508.628: 99.9554% ( 3) 00:08:00.633 25508.628 - 25609.452: 99.9777% ( 4) 00:08:00.633 25609.452 - 25710.277: 99.9944% ( 3) 00:08:00.633 25710.277 - 25811.102: 100.0000% ( 1) 00:08:00.633 00:08:00.633 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:00.633 ============================================================================== 00:08:00.633 Range in us Cumulative IO count 00:08:00.633 4310.252 - 4335.458: 0.0056% ( 1) 00:08:00.633 4335.458 - 4360.665: 0.0167% ( 2) 00:08:00.633 
4360.665 - 4385.871: 0.0335% ( 3) 00:08:00.633 4385.871 - 4411.077: 0.0446% ( 2) 00:08:00.633 4411.077 - 4436.283: 0.0558% ( 2) 00:08:00.633 4436.283 - 4461.489: 0.0670% ( 2) 00:08:00.633 4461.489 - 4486.695: 0.0781% ( 2) 00:08:00.633 4486.695 - 4511.902: 0.0949% ( 3) 00:08:00.633 4511.902 - 4537.108: 0.1060% ( 2) 00:08:00.633 4537.108 - 4562.314: 0.1228% ( 3) 00:08:00.633 4562.314 - 4587.520: 0.1339% ( 2) 00:08:00.633 4587.520 - 4612.726: 0.1451% ( 2) 00:08:00.633 4612.726 - 4637.932: 0.1618% ( 3) 00:08:00.633 4637.932 - 4663.138: 0.1730% ( 2) 00:08:00.633 4663.138 - 4688.345: 0.1842% ( 2) 00:08:00.633 4688.345 - 4713.551: 0.2009% ( 3) 00:08:00.633 4713.551 - 4738.757: 0.2121% ( 2) 00:08:00.633 4738.757 - 4763.963: 0.2288% ( 3) 00:08:00.633 4763.963 - 4789.169: 0.2400% ( 2) 00:08:00.633 4789.169 - 4814.375: 0.2511% ( 2) 00:08:00.633 4814.375 - 4839.582: 0.2679% ( 3) 00:08:00.633 4839.582 - 4864.788: 0.2790% ( 2) 00:08:00.633 4864.788 - 4889.994: 0.2902% ( 2) 00:08:00.633 4889.994 - 4915.200: 0.3069% ( 3) 00:08:00.633 4915.200 - 4940.406: 0.3181% ( 2) 00:08:00.633 4940.406 - 4965.612: 0.3348% ( 3) 00:08:00.633 4965.612 - 4990.818: 0.3460% ( 2) 00:08:00.633 4990.818 - 5016.025: 0.3571% ( 2) 00:08:00.633 5747.003 - 5772.209: 0.4297% ( 13) 00:08:00.633 5772.209 - 5797.415: 0.5636% ( 24) 00:08:00.633 5797.415 - 5822.622: 0.7366% ( 31) 00:08:00.633 5822.622 - 5847.828: 1.0938% ( 64) 00:08:00.633 5847.828 - 5873.034: 1.6071% ( 92) 00:08:00.633 5873.034 - 5898.240: 2.4051% ( 143) 00:08:00.633 5898.240 - 5923.446: 3.4487% ( 187) 00:08:00.633 5923.446 - 5948.652: 4.8047% ( 243) 00:08:00.633 5948.652 - 5973.858: 6.5513% ( 313) 00:08:00.633 5973.858 - 5999.065: 8.3371% ( 320) 00:08:00.633 5999.065 - 6024.271: 10.0335% ( 304) 00:08:00.633 6024.271 - 6049.477: 11.8415% ( 324) 00:08:00.633 6049.477 - 6074.683: 14.0904% ( 403) 00:08:00.633 6074.683 - 6099.889: 16.3225% ( 400) 00:08:00.633 6099.889 - 6125.095: 18.4542% ( 382) 00:08:00.633 6125.095 - 6150.302: 20.3683% ( 343) 00:08:00.633 6150.302 - 6175.508: 22.5558% ( 392) 00:08:00.633 6175.508 - 6200.714: 24.6652% ( 378) 00:08:00.633 6200.714 - 6225.920: 26.7969% ( 382) 00:08:00.633 6225.920 - 6251.126: 28.9900% ( 393) 00:08:00.633 6251.126 - 6276.332: 31.1663% ( 390) 00:08:00.633 6276.332 - 6301.538: 33.3929% ( 399) 00:08:00.633 6301.538 - 6326.745: 35.5971% ( 395) 00:08:00.633 6326.745 - 6351.951: 37.8571% ( 405) 00:08:00.633 6351.951 - 6377.157: 40.1786% ( 416) 00:08:00.633 6377.157 - 6402.363: 42.3772% ( 394) 00:08:00.633 6402.363 - 6427.569: 44.6540% ( 408) 00:08:00.633 6427.569 - 6452.775: 46.8917% ( 401) 00:08:00.633 6452.775 - 6503.188: 51.4565% ( 818) 00:08:00.633 6503.188 - 6553.600: 56.0324% ( 820) 00:08:00.633 6553.600 - 6604.012: 60.5859% ( 816) 00:08:00.633 6604.012 - 6654.425: 64.9498% ( 782) 00:08:00.633 6654.425 - 6704.837: 68.8058% ( 691) 00:08:00.633 6704.837 - 6755.249: 71.9475% ( 563) 00:08:00.633 6755.249 - 6805.662: 74.2132% ( 406) 00:08:00.633 6805.662 - 6856.074: 75.7980% ( 284) 00:08:00.633 6856.074 - 6906.486: 77.0368% ( 222) 00:08:00.633 6906.486 - 6956.898: 77.9129% ( 157) 00:08:00.633 6956.898 - 7007.311: 78.5212% ( 109) 00:08:00.633 7007.311 - 7057.723: 78.9844% ( 83) 00:08:00.633 7057.723 - 7108.135: 79.4252% ( 79) 00:08:00.633 7108.135 - 7158.548: 79.8661% ( 79) 00:08:00.633 7158.548 - 7208.960: 80.2902% ( 76) 00:08:00.633 7208.960 - 7259.372: 80.6585% ( 66) 00:08:00.633 7259.372 - 7309.785: 81.0268% ( 66) 00:08:00.633 7309.785 - 7360.197: 81.3337% ( 55) 00:08:00.633 7360.197 - 7410.609: 81.6518% ( 57) 00:08:00.633 
7410.609 - 7461.022: 81.9196% ( 48) 00:08:00.633 7461.022 - 7511.434: 82.1875% ( 48) 00:08:00.633 7511.434 - 7561.846: 82.4554% ( 48) 00:08:00.633 7561.846 - 7612.258: 82.7344% ( 50) 00:08:00.633 7612.258 - 7662.671: 82.9632% ( 41) 00:08:00.633 7662.671 - 7713.083: 83.1752% ( 38) 00:08:00.633 7713.083 - 7763.495: 83.3761% ( 36) 00:08:00.633 7763.495 - 7813.908: 83.5547% ( 32) 00:08:00.633 7813.908 - 7864.320: 83.6886% ( 24) 00:08:00.633 7864.320 - 7914.732: 83.8225% ( 24) 00:08:00.633 7914.732 - 7965.145: 83.9509% ( 23) 00:08:00.633 7965.145 - 8015.557: 84.0737% ( 22) 00:08:00.633 8015.557 - 8065.969: 84.1853% ( 20) 00:08:00.633 8065.969 - 8116.382: 84.3025% ( 21) 00:08:00.633 8116.382 - 8166.794: 84.4364% ( 24) 00:08:00.633 8166.794 - 8217.206: 84.5647% ( 23) 00:08:00.633 8217.206 - 8267.618: 84.6875% ( 22) 00:08:00.633 8267.618 - 8318.031: 84.8326% ( 26) 00:08:00.633 8318.031 - 8368.443: 84.9721% ( 25) 00:08:00.633 8368.443 - 8418.855: 85.1116% ( 25) 00:08:00.633 8418.855 - 8469.268: 85.2400% ( 23) 00:08:00.633 8469.268 - 8519.680: 85.3627% ( 22) 00:08:00.633 8519.680 - 8570.092: 85.4799% ( 21) 00:08:00.633 8570.092 - 8620.505: 85.5971% ( 21) 00:08:00.633 8620.505 - 8670.917: 85.6920% ( 17) 00:08:00.633 8670.917 - 8721.329: 85.8147% ( 22) 00:08:00.633 8721.329 - 8771.742: 85.9375% ( 22) 00:08:00.633 8771.742 - 8822.154: 86.0603% ( 22) 00:08:00.633 8822.154 - 8872.566: 86.1886% ( 23) 00:08:00.633 8872.566 - 8922.978: 86.3170% ( 23) 00:08:00.633 8922.978 - 8973.391: 86.4453% ( 23) 00:08:00.633 8973.391 - 9023.803: 86.5848% ( 25) 00:08:00.633 9023.803 - 9074.215: 86.7355% ( 27) 00:08:00.633 9074.215 - 9124.628: 86.9308% ( 35) 00:08:00.633 9124.628 - 9175.040: 87.1038% ( 31) 00:08:00.633 9175.040 - 9225.452: 87.2824% ( 32) 00:08:00.633 9225.452 - 9275.865: 87.4665% ( 33) 00:08:00.633 9275.865 - 9326.277: 87.6786% ( 38) 00:08:00.633 9326.277 - 9376.689: 87.8627% ( 33) 00:08:00.633 9376.689 - 9427.102: 88.0469% ( 33) 00:08:00.633 9427.102 - 9477.514: 88.2533% ( 37) 00:08:00.633 9477.514 - 9527.926: 88.4654% ( 38) 00:08:00.633 9527.926 - 9578.338: 88.6942% ( 41) 00:08:00.633 9578.338 - 9628.751: 88.9118% ( 39) 00:08:00.633 9628.751 - 9679.163: 89.1741% ( 47) 00:08:00.633 9679.163 - 9729.575: 89.4252% ( 45) 00:08:00.633 9729.575 - 9779.988: 89.6596% ( 42) 00:08:00.633 9779.988 - 9830.400: 89.8996% ( 43) 00:08:00.633 9830.400 - 9880.812: 90.1339% ( 42) 00:08:00.633 9880.812 - 9931.225: 90.3237% ( 34) 00:08:00.633 9931.225 - 9981.637: 90.4911% ( 30) 00:08:00.633 9981.637 - 10032.049: 90.6585% ( 30) 00:08:00.633 10032.049 - 10082.462: 90.8482% ( 34) 00:08:00.633 10082.462 - 10132.874: 91.0212% ( 31) 00:08:00.634 10132.874 - 10183.286: 91.1998% ( 32) 00:08:00.634 10183.286 - 10233.698: 91.3895% ( 34) 00:08:00.634 10233.698 - 10284.111: 91.5681% ( 32) 00:08:00.634 10284.111 - 10334.523: 91.7467% ( 32) 00:08:00.634 10334.523 - 10384.935: 91.9085% ( 29) 00:08:00.634 10384.935 - 10435.348: 92.0647% ( 28) 00:08:00.634 10435.348 - 10485.760: 92.2377% ( 31) 00:08:00.634 10485.760 - 10536.172: 92.4051% ( 30) 00:08:00.634 10536.172 - 10586.585: 92.5725% ( 30) 00:08:00.634 10586.585 - 10636.997: 92.7567% ( 33) 00:08:00.634 10636.997 - 10687.409: 92.9967% ( 43) 00:08:00.634 10687.409 - 10737.822: 93.2422% ( 44) 00:08:00.634 10737.822 - 10788.234: 93.5324% ( 52) 00:08:00.634 10788.234 - 10838.646: 93.7444% ( 38) 00:08:00.634 10838.646 - 10889.058: 93.9844% ( 43) 00:08:00.634 10889.058 - 10939.471: 94.2522% ( 48) 00:08:00.634 10939.471 - 10989.883: 94.5145% ( 47) 00:08:00.634 10989.883 - 11040.295: 94.7433% ( 
41) 00:08:00.634 11040.295 - 11090.708: 94.9721% ( 41) 00:08:00.634 11090.708 - 11141.120: 95.1786% ( 37) 00:08:00.634 11141.120 - 11191.532: 95.3850% ( 37) 00:08:00.634 11191.532 - 11241.945: 95.5580% ( 31) 00:08:00.634 11241.945 - 11292.357: 95.7254% ( 30) 00:08:00.634 11292.357 - 11342.769: 95.8984% ( 31) 00:08:00.634 11342.769 - 11393.182: 96.0379% ( 25) 00:08:00.634 11393.182 - 11443.594: 96.1775% ( 25) 00:08:00.634 11443.594 - 11494.006: 96.2835% ( 19) 00:08:00.634 11494.006 - 11544.418: 96.3504% ( 12) 00:08:00.634 11544.418 - 11594.831: 96.4286% ( 14) 00:08:00.634 11594.831 - 11645.243: 96.5123% ( 15) 00:08:00.634 11645.243 - 11695.655: 96.5848% ( 13) 00:08:00.634 11695.655 - 11746.068: 96.6629% ( 14) 00:08:00.634 11746.068 - 11796.480: 96.7299% ( 12) 00:08:00.634 11796.480 - 11846.892: 96.7969% ( 12) 00:08:00.634 11846.892 - 11897.305: 96.8694% ( 13) 00:08:00.634 11897.305 - 11947.717: 96.9420% ( 13) 00:08:00.634 11947.717 - 11998.129: 97.0089% ( 12) 00:08:00.634 11998.129 - 12048.542: 97.0982% ( 16) 00:08:00.634 12048.542 - 12098.954: 97.1819% ( 15) 00:08:00.634 12098.954 - 12149.366: 97.2712% ( 16) 00:08:00.634 12149.366 - 12199.778: 97.3661% ( 17) 00:08:00.634 12199.778 - 12250.191: 97.4665% ( 18) 00:08:00.634 12250.191 - 12300.603: 97.5670% ( 18) 00:08:00.634 12300.603 - 12351.015: 97.6842% ( 21) 00:08:00.634 12351.015 - 12401.428: 97.7902% ( 19) 00:08:00.634 12401.428 - 12451.840: 97.8850% ( 17) 00:08:00.634 12451.840 - 12502.252: 97.9464% ( 11) 00:08:00.634 12502.252 - 12552.665: 98.0301% ( 15) 00:08:00.634 12552.665 - 12603.077: 98.0971% ( 12) 00:08:00.634 12603.077 - 12653.489: 98.1808% ( 15) 00:08:00.634 12653.489 - 12703.902: 98.2589% ( 14) 00:08:00.634 12703.902 - 12754.314: 98.3315% ( 13) 00:08:00.634 12754.314 - 12804.726: 98.4096% ( 14) 00:08:00.634 12804.726 - 12855.138: 98.4710% ( 11) 00:08:00.634 12855.138 - 12905.551: 98.5379% ( 12) 00:08:00.634 12905.551 - 13006.375: 98.6217% ( 15) 00:08:00.634 13006.375 - 13107.200: 98.7054% ( 15) 00:08:00.634 13107.200 - 13208.025: 98.7835% ( 14) 00:08:00.634 13208.025 - 13308.849: 98.8337% ( 9) 00:08:00.634 13308.849 - 13409.674: 98.8560% ( 4) 00:08:00.634 13409.674 - 13510.498: 98.8839% ( 5) 00:08:00.634 13510.498 - 13611.323: 98.9286% ( 8) 00:08:00.634 13611.323 - 13712.148: 98.9788% ( 9) 00:08:00.634 13712.148 - 13812.972: 99.0067% ( 5) 00:08:00.634 13812.972 - 13913.797: 99.0234% ( 3) 00:08:00.634 13913.797 - 14014.622: 99.0402% ( 3) 00:08:00.634 14014.622 - 14115.446: 99.0625% ( 4) 00:08:00.634 14115.446 - 14216.271: 99.0848% ( 4) 00:08:00.634 14216.271 - 14317.095: 99.1071% ( 4) 00:08:00.634 14317.095 - 14417.920: 99.1295% ( 4) 00:08:00.634 14417.920 - 14518.745: 99.1518% ( 4) 00:08:00.634 14518.745 - 14619.569: 99.1741% ( 4) 00:08:00.634 14619.569 - 14720.394: 99.1964% ( 4) 00:08:00.634 14720.394 - 14821.218: 99.2188% ( 4) 00:08:00.634 14821.218 - 14922.043: 99.2411% ( 4) 00:08:00.634 14922.043 - 15022.868: 99.2634% ( 4) 00:08:00.634 15022.868 - 15123.692: 99.2857% ( 4) 00:08:00.634 15829.465 - 15930.289: 99.2969% ( 2) 00:08:00.634 15930.289 - 16031.114: 99.3136% ( 3) 00:08:00.634 16031.114 - 16131.938: 99.3359% ( 4) 00:08:00.634 16131.938 - 16232.763: 99.3583% ( 4) 00:08:00.634 16232.763 - 16333.588: 99.3806% ( 4) 00:08:00.634 16333.588 - 16434.412: 99.3973% ( 3) 00:08:00.634 16434.412 - 16535.237: 99.4196% ( 4) 00:08:00.634 16535.237 - 16636.062: 99.4420% ( 4) 00:08:00.634 16636.062 - 16736.886: 99.4643% ( 4) 00:08:00.634 16736.886 - 16837.711: 99.4810% ( 3) 00:08:00.634 16837.711 - 16938.535: 99.4978% ( 3) 
00:08:00.634 16938.535 - 17039.360: 99.5201% ( 4) 00:08:00.634 17039.360 - 17140.185: 99.5424% ( 4) 00:08:00.634 17140.185 - 17241.009: 99.5647% ( 4) 00:08:00.634 17241.009 - 17341.834: 99.5815% ( 3) 00:08:00.634 17341.834 - 17442.658: 99.6038% ( 4) 00:08:00.634 17442.658 - 17543.483: 99.6261% ( 4) 00:08:00.634 17543.483 - 17644.308: 99.6429% ( 3) 00:08:00.634 24097.083 - 24197.908: 99.6540% ( 2) 00:08:00.634 24197.908 - 24298.732: 99.6708% ( 3) 00:08:00.634 24298.732 - 24399.557: 99.6931% ( 4) 00:08:00.634 24399.557 - 24500.382: 99.7154% ( 4) 00:08:00.634 24500.382 - 24601.206: 99.7321% ( 3) 00:08:00.634 24601.206 - 24702.031: 99.7545% ( 4) 00:08:00.634 24702.031 - 24802.855: 99.7768% ( 4) 00:08:00.634 24802.855 - 24903.680: 99.7991% ( 4) 00:08:00.634 24903.680 - 25004.505: 99.8214% ( 4) 00:08:00.634 25004.505 - 25105.329: 99.8382% ( 3) 00:08:00.634 25105.329 - 25206.154: 99.8605% ( 4) 00:08:00.634 25206.154 - 25306.978: 99.8828% ( 4) 00:08:00.634 25306.978 - 25407.803: 99.9051% ( 4) 00:08:00.634 25407.803 - 25508.628: 99.9275% ( 4) 00:08:00.634 25508.628 - 25609.452: 99.9442% ( 3) 00:08:00.634 25609.452 - 25710.277: 99.9665% ( 4) 00:08:00.634 25710.277 - 25811.102: 99.9888% ( 4) 00:08:00.634 25811.102 - 26012.751: 100.0000% ( 2) 00:08:00.634 00:08:00.634 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:00.634 ============================================================================== 00:08:00.634 Range in us Cumulative IO count 00:08:00.634 3856.542 - 3881.748: 0.0056% ( 1) 00:08:00.634 3881.748 - 3906.954: 0.0446% ( 7) 00:08:00.634 3932.160 - 3957.366: 0.0558% ( 2) 00:08:00.634 3957.366 - 3982.572: 0.0725% ( 3) 00:08:00.634 3982.572 - 4007.778: 0.0837% ( 2) 00:08:00.634 4007.778 - 4032.985: 0.0949% ( 2) 00:08:00.634 4032.985 - 4058.191: 0.1116% ( 3) 00:08:00.634 4058.191 - 4083.397: 0.1228% ( 2) 00:08:00.634 4083.397 - 4108.603: 0.1339% ( 2) 00:08:00.634 4108.603 - 4133.809: 0.1507% ( 3) 00:08:00.634 4133.809 - 4159.015: 0.1618% ( 2) 00:08:00.634 4159.015 - 4184.222: 0.1786% ( 3) 00:08:00.634 4184.222 - 4209.428: 0.1897% ( 2) 00:08:00.634 4209.428 - 4234.634: 0.2009% ( 2) 00:08:00.634 4234.634 - 4259.840: 0.2176% ( 3) 00:08:00.634 4259.840 - 4285.046: 0.2288% ( 2) 00:08:00.634 4285.046 - 4310.252: 0.2400% ( 2) 00:08:00.634 4310.252 - 4335.458: 0.2567% ( 3) 00:08:00.634 4335.458 - 4360.665: 0.2679% ( 2) 00:08:00.634 4360.665 - 4385.871: 0.2846% ( 3) 00:08:00.634 4385.871 - 4411.077: 0.2958% ( 2) 00:08:00.634 4411.077 - 4436.283: 0.3069% ( 2) 00:08:00.634 4436.283 - 4461.489: 0.3237% ( 3) 00:08:00.634 4461.489 - 4486.695: 0.3348% ( 2) 00:08:00.634 4486.695 - 4511.902: 0.3460% ( 2) 00:08:00.634 4511.902 - 4537.108: 0.3571% ( 2) 00:08:00.634 5293.292 - 5318.498: 0.3683% ( 2) 00:08:00.634 5318.498 - 5343.705: 0.3795% ( 2) 00:08:00.634 5343.705 - 5368.911: 0.3850% ( 1) 00:08:00.634 5368.911 - 5394.117: 0.4018% ( 3) 00:08:00.634 5394.117 - 5419.323: 0.4129% ( 2) 00:08:00.634 5419.323 - 5444.529: 0.4297% ( 3) 00:08:00.634 5444.529 - 5469.735: 0.4464% ( 3) 00:08:00.634 5469.735 - 5494.942: 0.4576% ( 2) 00:08:00.634 5494.942 - 5520.148: 0.4743% ( 3) 00:08:00.634 5520.148 - 5545.354: 0.4855% ( 2) 00:08:00.634 5545.354 - 5570.560: 0.5022% ( 3) 00:08:00.634 5570.560 - 5595.766: 0.5134% ( 2) 00:08:00.634 5595.766 - 5620.972: 0.5301% ( 3) 00:08:00.634 5620.972 - 5646.178: 0.5413% ( 2) 00:08:00.634 5646.178 - 5671.385: 0.5525% ( 2) 00:08:00.634 5671.385 - 5696.591: 0.5692% ( 3) 00:08:00.634 5696.591 - 5721.797: 0.5804% ( 2) 00:08:00.634 5721.797 - 5747.003: 0.5915% ( 2) 
00:08:00.634 5747.003 - 5772.209: 0.6083% ( 3) 00:08:00.634 5772.209 - 5797.415: 0.6473% ( 7) 00:08:00.634 5797.415 - 5822.622: 0.8315% ( 33) 00:08:00.634 5822.622 - 5847.828: 1.0882% ( 46) 00:08:00.634 5847.828 - 5873.034: 1.7578% ( 120) 00:08:00.634 5873.034 - 5898.240: 2.7232% ( 173) 00:08:00.634 5898.240 - 5923.446: 3.8616% ( 204) 00:08:00.634 5923.446 - 5948.652: 5.0167% ( 207) 00:08:00.634 5948.652 - 5973.858: 6.4844% ( 263) 00:08:00.634 5973.858 - 5999.065: 8.2478% ( 316) 00:08:00.634 5999.065 - 6024.271: 10.2009% ( 350) 00:08:00.634 6024.271 - 6049.477: 12.1540% ( 350) 00:08:00.634 6049.477 - 6074.683: 14.1462% ( 357) 00:08:00.634 6074.683 - 6099.889: 16.3728% ( 399) 00:08:00.634 6099.889 - 6125.095: 18.5156% ( 384) 00:08:00.634 6125.095 - 6150.302: 20.6083% ( 375) 00:08:00.634 6150.302 - 6175.508: 22.7623% ( 386) 00:08:00.634 6175.508 - 6200.714: 24.9386% ( 390) 00:08:00.634 6200.714 - 6225.920: 27.0703% ( 382) 00:08:00.634 6225.920 - 6251.126: 29.1183% ( 367) 00:08:00.634 6251.126 - 6276.332: 31.2333% ( 379) 00:08:00.634 6276.332 - 6301.538: 33.4152% ( 391) 00:08:00.634 6301.538 - 6326.745: 35.6975% ( 409) 00:08:00.634 6326.745 - 6351.951: 38.0636% ( 424) 00:08:00.634 6351.951 - 6377.157: 40.3627% ( 412) 00:08:00.634 6377.157 - 6402.363: 42.5670% ( 395) 00:08:00.635 6402.363 - 6427.569: 44.7935% ( 399) 00:08:00.635 6427.569 - 6452.775: 47.0424% ( 403) 00:08:00.635 6452.775 - 6503.188: 51.5737% ( 812) 00:08:00.635 6503.188 - 6553.600: 56.1328% ( 817) 00:08:00.635 6553.600 - 6604.012: 60.7366% ( 825) 00:08:00.635 6604.012 - 6654.425: 65.2232% ( 804) 00:08:00.635 6654.425 - 6704.837: 69.2690% ( 725) 00:08:00.635 6704.837 - 6755.249: 72.4944% ( 578) 00:08:00.635 6755.249 - 6805.662: 74.9219% ( 435) 00:08:00.635 6805.662 - 6856.074: 76.5737% ( 296) 00:08:00.635 6856.074 - 6906.486: 77.7455% ( 210) 00:08:00.635 6906.486 - 6956.898: 78.6105% ( 155) 00:08:00.635 6956.898 - 7007.311: 79.1685% ( 100) 00:08:00.635 7007.311 - 7057.723: 79.6038% ( 78) 00:08:00.635 7057.723 - 7108.135: 79.9554% ( 63) 00:08:00.635 7108.135 - 7158.548: 80.3627% ( 73) 00:08:00.635 7158.548 - 7208.960: 80.6975% ( 60) 00:08:00.635 7208.960 - 7259.372: 81.0045% ( 55) 00:08:00.635 7259.372 - 7309.785: 81.3281% ( 58) 00:08:00.635 7309.785 - 7360.197: 81.6629% ( 60) 00:08:00.635 7360.197 - 7410.609: 81.9364% ( 49) 00:08:00.635 7410.609 - 7461.022: 82.2042% ( 48) 00:08:00.635 7461.022 - 7511.434: 82.4609% ( 46) 00:08:00.635 7511.434 - 7561.846: 82.6618% ( 36) 00:08:00.635 7561.846 - 7612.258: 82.8683% ( 37) 00:08:00.635 7612.258 - 7662.671: 83.0413% ( 31) 00:08:00.635 7662.671 - 7713.083: 83.2254% ( 33) 00:08:00.635 7713.083 - 7763.495: 83.3761% ( 27) 00:08:00.635 7763.495 - 7813.908: 83.5100% ( 24) 00:08:00.635 7813.908 - 7864.320: 83.6272% ( 21) 00:08:00.635 7864.320 - 7914.732: 83.7835% ( 28) 00:08:00.635 7914.732 - 7965.145: 83.9118% ( 23) 00:08:00.635 7965.145 - 8015.557: 84.0290% ( 21) 00:08:00.635 8015.557 - 8065.969: 84.1295% ( 18) 00:08:00.635 8065.969 - 8116.382: 84.2411% ( 20) 00:08:00.635 8116.382 - 8166.794: 84.3694% ( 23) 00:08:00.635 8166.794 - 8217.206: 84.4922% ( 22) 00:08:00.635 8217.206 - 8267.618: 84.6373% ( 26) 00:08:00.635 8267.618 - 8318.031: 84.7600% ( 22) 00:08:00.635 8318.031 - 8368.443: 84.8717% ( 20) 00:08:00.635 8368.443 - 8418.855: 85.0056% ( 24) 00:08:00.635 8418.855 - 8469.268: 85.1395% ( 24) 00:08:00.635 8469.268 - 8519.680: 85.2734% ( 24) 00:08:00.635 8519.680 - 8570.092: 85.3795% ( 19) 00:08:00.635 8570.092 - 8620.505: 85.4967% ( 21) 00:08:00.635 8620.505 - 8670.917: 85.6194% ( 22) 
00:08:00.635 8670.917 - 8721.329: 85.7422% ( 22) 00:08:00.635 8721.329 - 8771.742: 85.9040% ( 29) 00:08:00.635 8771.742 - 8822.154: 86.0658% ( 29) 00:08:00.635 8822.154 - 8872.566: 86.2333% ( 30) 00:08:00.635 8872.566 - 8922.978: 86.3951% ( 29) 00:08:00.635 8922.978 - 8973.391: 86.5458% ( 27) 00:08:00.635 8973.391 - 9023.803: 86.6853% ( 25) 00:08:00.635 9023.803 - 9074.215: 86.8136% ( 23) 00:08:00.635 9074.215 - 9124.628: 86.9754% ( 29) 00:08:00.635 9124.628 - 9175.040: 87.1261% ( 27) 00:08:00.635 9175.040 - 9225.452: 87.2768% ( 27) 00:08:00.635 9225.452 - 9275.865: 87.4275% ( 27) 00:08:00.635 9275.865 - 9326.277: 87.5837% ( 28) 00:08:00.635 9326.277 - 9376.689: 87.7455% ( 29) 00:08:00.635 9376.689 - 9427.102: 87.9129% ( 30) 00:08:00.635 9427.102 - 9477.514: 88.0859% ( 31) 00:08:00.635 9477.514 - 9527.926: 88.2478% ( 29) 00:08:00.635 9527.926 - 9578.338: 88.4040% ( 28) 00:08:00.635 9578.338 - 9628.751: 88.5826% ( 32) 00:08:00.635 9628.751 - 9679.163: 88.7946% ( 38) 00:08:00.635 9679.163 - 9729.575: 88.9900% ( 35) 00:08:00.635 9729.575 - 9779.988: 89.1964% ( 37) 00:08:00.635 9779.988 - 9830.400: 89.3527% ( 28) 00:08:00.635 9830.400 - 9880.812: 89.5089% ( 28) 00:08:00.635 9880.812 - 9931.225: 89.6931% ( 33) 00:08:00.635 9931.225 - 9981.637: 89.9442% ( 45) 00:08:00.635 9981.637 - 10032.049: 90.1562% ( 38) 00:08:00.635 10032.049 - 10082.462: 90.3962% ( 43) 00:08:00.635 10082.462 - 10132.874: 90.6417% ( 44) 00:08:00.635 10132.874 - 10183.286: 90.9040% ( 47) 00:08:00.635 10183.286 - 10233.698: 91.1328% ( 41) 00:08:00.635 10233.698 - 10284.111: 91.3839% ( 45) 00:08:00.635 10284.111 - 10334.523: 91.6406% ( 46) 00:08:00.635 10334.523 - 10384.935: 91.9029% ( 47) 00:08:00.635 10384.935 - 10435.348: 92.1819% ( 50) 00:08:00.635 10435.348 - 10485.760: 92.4498% ( 48) 00:08:00.635 10485.760 - 10536.172: 92.6953% ( 44) 00:08:00.635 10536.172 - 10586.585: 92.9297% ( 42) 00:08:00.635 10586.585 - 10636.997: 93.1641% ( 42) 00:08:00.635 10636.997 - 10687.409: 93.4375% ( 49) 00:08:00.635 10687.409 - 10737.822: 93.6886% ( 45) 00:08:00.635 10737.822 - 10788.234: 93.9621% ( 49) 00:08:00.635 10788.234 - 10838.646: 94.2188% ( 46) 00:08:00.635 10838.646 - 10889.058: 94.4531% ( 42) 00:08:00.635 10889.058 - 10939.471: 94.6819% ( 41) 00:08:00.635 10939.471 - 10989.883: 94.8717% ( 34) 00:08:00.635 10989.883 - 11040.295: 95.0335% ( 29) 00:08:00.635 11040.295 - 11090.708: 95.2232% ( 34) 00:08:00.635 11090.708 - 11141.120: 95.3906% ( 30) 00:08:00.635 11141.120 - 11191.532: 95.5525% ( 29) 00:08:00.635 11191.532 - 11241.945: 95.7422% ( 34) 00:08:00.635 11241.945 - 11292.357: 95.8761% ( 24) 00:08:00.635 11292.357 - 11342.769: 96.0379% ( 29) 00:08:00.635 11342.769 - 11393.182: 96.1775% ( 25) 00:08:00.635 11393.182 - 11443.594: 96.3058% ( 23) 00:08:00.635 11443.594 - 11494.006: 96.4230% ( 21) 00:08:00.635 11494.006 - 11544.418: 96.5067% ( 15) 00:08:00.635 11544.418 - 11594.831: 96.5737% ( 12) 00:08:00.635 11594.831 - 11645.243: 96.6406% ( 12) 00:08:00.635 11645.243 - 11695.655: 96.6908% ( 9) 00:08:00.635 11695.655 - 11746.068: 96.7411% ( 9) 00:08:00.635 11746.068 - 11796.480: 96.8080% ( 12) 00:08:00.635 11796.480 - 11846.892: 96.8638% ( 10) 00:08:00.635 11846.892 - 11897.305: 96.9531% ( 16) 00:08:00.635 11897.305 - 11947.717: 97.0368% ( 15) 00:08:00.635 11947.717 - 11998.129: 97.1317% ( 17) 00:08:00.635 11998.129 - 12048.542: 97.2098% ( 14) 00:08:00.635 12048.542 - 12098.954: 97.2712% ( 11) 00:08:00.635 12098.954 - 12149.366: 97.3493% ( 14) 00:08:00.635 12149.366 - 12199.778: 97.4275% ( 14) 00:08:00.635 12199.778 - 12250.191: 
97.5112% ( 15) 00:08:00.635 12250.191 - 12300.603: 97.6004% ( 16) 00:08:00.635 12300.603 - 12351.015: 97.6842% ( 15) 00:08:00.635 12351.015 - 12401.428: 97.7734% ( 16) 00:08:00.635 12401.428 - 12451.840: 97.8516% ( 14) 00:08:00.635 12451.840 - 12502.252: 97.9297% ( 14) 00:08:00.635 12502.252 - 12552.665: 98.0022% ( 13) 00:08:00.635 12552.665 - 12603.077: 98.0859% ( 15) 00:08:00.635 12603.077 - 12653.489: 98.1641% ( 14) 00:08:00.635 12653.489 - 12703.902: 98.2422% ( 14) 00:08:00.635 12703.902 - 12754.314: 98.3203% ( 14) 00:08:00.635 12754.314 - 12804.726: 98.4040% ( 15) 00:08:00.635 12804.726 - 12855.138: 98.4821% ( 14) 00:08:00.635 12855.138 - 12905.551: 98.5435% ( 11) 00:08:00.635 12905.551 - 13006.375: 98.6551% ( 20) 00:08:00.635 13006.375 - 13107.200: 98.7612% ( 19) 00:08:00.635 13107.200 - 13208.025: 98.8560% ( 17) 00:08:00.635 13208.025 - 13308.849: 98.9007% ( 8) 00:08:00.635 13308.849 - 13409.674: 98.9230% ( 4) 00:08:00.635 13409.674 - 13510.498: 98.9286% ( 1) 00:08:00.635 13812.972 - 13913.797: 98.9397% ( 2) 00:08:00.635 13913.797 - 14014.622: 98.9565% ( 3) 00:08:00.635 14014.622 - 14115.446: 98.9788% ( 4) 00:08:00.635 14115.446 - 14216.271: 99.0067% ( 5) 00:08:00.635 14216.271 - 14317.095: 99.0290% ( 4) 00:08:00.635 14317.095 - 14417.920: 99.0513% ( 4) 00:08:00.635 14417.920 - 14518.745: 99.0737% ( 4) 00:08:00.635 14518.745 - 14619.569: 99.0960% ( 4) 00:08:00.635 14619.569 - 14720.394: 99.1183% ( 4) 00:08:00.635 14720.394 - 14821.218: 99.1295% ( 2) 00:08:00.635 14821.218 - 14922.043: 99.1574% ( 5) 00:08:00.635 14922.043 - 15022.868: 99.1797% ( 4) 00:08:00.635 15022.868 - 15123.692: 99.2020% ( 4) 00:08:00.635 15123.692 - 15224.517: 99.2243% ( 4) 00:08:00.635 15224.517 - 15325.342: 99.2467% ( 4) 00:08:00.635 15325.342 - 15426.166: 99.2690% ( 4) 00:08:00.635 15426.166 - 15526.991: 99.2857% ( 3) 00:08:00.635 15930.289 - 16031.114: 99.2913% ( 1) 00:08:00.635 16031.114 - 16131.938: 99.3080% ( 3) 00:08:00.635 16131.938 - 16232.763: 99.3304% ( 4) 00:08:00.635 16232.763 - 16333.588: 99.3527% ( 4) 00:08:00.635 16333.588 - 16434.412: 99.3750% ( 4) 00:08:00.635 16434.412 - 16535.237: 99.3917% ( 3) 00:08:00.635 16535.237 - 16636.062: 99.4141% ( 4) 00:08:00.635 16636.062 - 16736.886: 99.4364% ( 4) 00:08:00.635 16736.886 - 16837.711: 99.4531% ( 3) 00:08:00.635 16837.711 - 16938.535: 99.4754% ( 4) 00:08:00.635 16938.535 - 17039.360: 99.4978% ( 4) 00:08:00.635 17039.360 - 17140.185: 99.5145% ( 3) 00:08:00.635 17140.185 - 17241.009: 99.5368% ( 4) 00:08:00.635 17241.009 - 17341.834: 99.5592% ( 4) 00:08:00.635 17341.834 - 17442.658: 99.5815% ( 4) 00:08:00.635 17442.658 - 17543.483: 99.6038% ( 4) 00:08:00.635 17543.483 - 17644.308: 99.6261% ( 4) 00:08:00.635 17644.308 - 17745.132: 99.6429% ( 3) 00:08:00.635 24197.908 - 24298.732: 99.6484% ( 1) 00:08:00.635 24298.732 - 24399.557: 99.6708% ( 4) 00:08:00.635 24399.557 - 24500.382: 99.6931% ( 4) 00:08:00.635 24500.382 - 24601.206: 99.7154% ( 4) 00:08:00.635 24601.206 - 24702.031: 99.7321% ( 3) 00:08:00.635 24702.031 - 24802.855: 99.7545% ( 4) 00:08:00.635 24802.855 - 24903.680: 99.7768% ( 4) 00:08:00.635 24903.680 - 25004.505: 99.7935% ( 3) 00:08:00.635 25004.505 - 25105.329: 99.8158% ( 4) 00:08:00.635 25105.329 - 25206.154: 99.8382% ( 4) 00:08:00.635 25206.154 - 25306.978: 99.8549% ( 3) 00:08:00.635 25306.978 - 25407.803: 99.8772% ( 4) 00:08:00.635 25407.803 - 25508.628: 99.8996% ( 4) 00:08:00.635 25508.628 - 25609.452: 99.9163% ( 3) 00:08:00.635 25609.452 - 25710.277: 99.9386% ( 4) 00:08:00.635 25710.277 - 25811.102: 99.9609% ( 4) 00:08:00.635 25811.102 
- 26012.751: 100.0000% ( 7) 00:08:00.635 00:08:00.635 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:00.636 ============================================================================== 00:08:00.636 Range in us Cumulative IO count 00:08:00.636 3654.892 - 3680.098: 0.0391% ( 7) 00:08:00.636 3680.098 - 3705.305: 0.0446% ( 1) 00:08:00.636 3705.305 - 3730.511: 0.0558% ( 2) 00:08:00.636 3730.511 - 3755.717: 0.0614% ( 1) 00:08:00.636 3755.717 - 3780.923: 0.0837% ( 4) 00:08:00.636 3780.923 - 3806.129: 0.1004% ( 3) 00:08:00.636 3806.129 - 3831.335: 0.1116% ( 2) 00:08:00.636 3831.335 - 3856.542: 0.1172% ( 1) 00:08:00.636 3856.542 - 3881.748: 0.1283% ( 2) 00:08:00.636 3881.748 - 3906.954: 0.1451% ( 3) 00:08:00.636 3906.954 - 3932.160: 0.1562% ( 2) 00:08:00.636 3932.160 - 3957.366: 0.1674% ( 2) 00:08:00.636 3957.366 - 3982.572: 0.1842% ( 3) 00:08:00.636 3982.572 - 4007.778: 0.1953% ( 2) 00:08:00.636 4007.778 - 4032.985: 0.2065% ( 2) 00:08:00.636 4032.985 - 4058.191: 0.2232% ( 3) 00:08:00.636 4058.191 - 4083.397: 0.2344% ( 2) 00:08:00.636 4083.397 - 4108.603: 0.2455% ( 2) 00:08:00.636 4108.603 - 4133.809: 0.2567% ( 2) 00:08:00.636 4133.809 - 4159.015: 0.2679% ( 2) 00:08:00.636 4159.015 - 4184.222: 0.2846% ( 3) 00:08:00.636 4184.222 - 4209.428: 0.2958% ( 2) 00:08:00.636 4209.428 - 4234.634: 0.3069% ( 2) 00:08:00.636 4234.634 - 4259.840: 0.3237% ( 3) 00:08:00.636 4259.840 - 4285.046: 0.3348% ( 2) 00:08:00.636 4285.046 - 4310.252: 0.3516% ( 3) 00:08:00.636 4310.252 - 4335.458: 0.3571% ( 1) 00:08:00.636 5142.055 - 5167.262: 0.3683% ( 2) 00:08:00.636 5167.262 - 5192.468: 0.4074% ( 7) 00:08:00.636 5192.468 - 5217.674: 0.4408% ( 6) 00:08:00.636 5268.086 - 5293.292: 0.4520% ( 2) 00:08:00.636 5293.292 - 5318.498: 0.4688% ( 3) 00:08:00.636 5318.498 - 5343.705: 0.4799% ( 2) 00:08:00.636 5343.705 - 5368.911: 0.4911% ( 2) 00:08:00.636 5368.911 - 5394.117: 0.5022% ( 2) 00:08:00.636 5394.117 - 5419.323: 0.5190% ( 3) 00:08:00.636 5419.323 - 5444.529: 0.5301% ( 2) 00:08:00.636 5444.529 - 5469.735: 0.5413% ( 2) 00:08:00.636 5469.735 - 5494.942: 0.5580% ( 3) 00:08:00.636 5494.942 - 5520.148: 0.5692% ( 2) 00:08:00.636 5520.148 - 5545.354: 0.5804% ( 2) 00:08:00.636 5545.354 - 5570.560: 0.5971% ( 3) 00:08:00.636 5570.560 - 5595.766: 0.6083% ( 2) 00:08:00.636 5595.766 - 5620.972: 0.6250% ( 3) 00:08:00.636 5620.972 - 5646.178: 0.6362% ( 2) 00:08:00.636 5646.178 - 5671.385: 0.6473% ( 2) 00:08:00.636 5671.385 - 5696.591: 0.6641% ( 3) 00:08:00.636 5696.591 - 5721.797: 0.6752% ( 2) 00:08:00.636 5721.797 - 5747.003: 0.6920% ( 3) 00:08:00.636 5747.003 - 5772.209: 0.7087% ( 3) 00:08:00.636 5772.209 - 5797.415: 0.8426% ( 24) 00:08:00.636 5797.415 - 5822.622: 1.0324% ( 34) 00:08:00.636 5822.622 - 5847.828: 1.3672% ( 60) 00:08:00.636 5847.828 - 5873.034: 1.9085% ( 97) 00:08:00.636 5873.034 - 5898.240: 2.6116% ( 126) 00:08:00.636 5898.240 - 5923.446: 3.7444% ( 203) 00:08:00.636 5923.446 - 5948.652: 5.1953% ( 260) 00:08:00.636 5948.652 - 5973.858: 6.7355% ( 276) 00:08:00.636 5973.858 - 5999.065: 8.4431% ( 306) 00:08:00.636 5999.065 - 6024.271: 10.2734% ( 328) 00:08:00.636 6024.271 - 6049.477: 12.1931% ( 344) 00:08:00.636 6049.477 - 6074.683: 14.1908% ( 358) 00:08:00.636 6074.683 - 6099.889: 16.1551% ( 352) 00:08:00.636 6099.889 - 6125.095: 18.3984% ( 402) 00:08:00.636 6125.095 - 6150.302: 20.5022% ( 377) 00:08:00.636 6150.302 - 6175.508: 22.5837% ( 373) 00:08:00.636 6175.508 - 6200.714: 24.8047% ( 398) 00:08:00.636 6200.714 - 6225.920: 27.0368% ( 400) 00:08:00.636 6225.920 - 6251.126: 29.1685% ( 382) 00:08:00.636 
6251.126 - 6276.332: 31.3337% ( 388) 00:08:00.636 6276.332 - 6301.538: 33.4877% ( 386) 00:08:00.636 6301.538 - 6326.745: 35.6027% ( 379) 00:08:00.636 6326.745 - 6351.951: 37.7902% ( 392) 00:08:00.636 6351.951 - 6377.157: 39.9888% ( 394) 00:08:00.636 6377.157 - 6402.363: 42.1373% ( 385) 00:08:00.636 6402.363 - 6427.569: 44.4308% ( 411) 00:08:00.636 6427.569 - 6452.775: 46.6685% ( 401) 00:08:00.636 6452.775 - 6503.188: 51.1440% ( 802) 00:08:00.636 6503.188 - 6553.600: 55.8426% ( 842) 00:08:00.636 6553.600 - 6604.012: 60.4743% ( 830) 00:08:00.636 6604.012 - 6654.425: 64.8996% ( 793) 00:08:00.636 6654.425 - 6704.837: 68.8560% ( 709) 00:08:00.636 6704.837 - 6755.249: 72.0647% ( 575) 00:08:00.636 6755.249 - 6805.662: 74.5089% ( 438) 00:08:00.636 6805.662 - 6856.074: 76.1049% ( 286) 00:08:00.636 6856.074 - 6906.486: 77.2991% ( 214) 00:08:00.636 6906.486 - 6956.898: 78.2422% ( 169) 00:08:00.636 6956.898 - 7007.311: 78.8560% ( 110) 00:08:00.636 7007.311 - 7057.723: 79.3527% ( 89) 00:08:00.636 7057.723 - 7108.135: 79.7768% ( 76) 00:08:00.636 7108.135 - 7158.548: 80.2121% ( 78) 00:08:00.636 7158.548 - 7208.960: 80.5971% ( 69) 00:08:00.636 7208.960 - 7259.372: 81.0100% ( 74) 00:08:00.636 7259.372 - 7309.785: 81.4900% ( 86) 00:08:00.636 7309.785 - 7360.197: 81.9029% ( 74) 00:08:00.636 7360.197 - 7410.609: 82.2266% ( 58) 00:08:00.636 7410.609 - 7461.022: 82.4665% ( 43) 00:08:00.636 7461.022 - 7511.434: 82.6842% ( 39) 00:08:00.636 7511.434 - 7561.846: 82.8850% ( 36) 00:08:00.636 7561.846 - 7612.258: 83.0971% ( 38) 00:08:00.636 7612.258 - 7662.671: 83.2868% ( 34) 00:08:00.636 7662.671 - 7713.083: 83.4431% ( 28) 00:08:00.636 7713.083 - 7763.495: 83.6161% ( 31) 00:08:00.636 7763.495 - 7813.908: 83.7556% ( 25) 00:08:00.636 7813.908 - 7864.320: 83.9286% ( 31) 00:08:00.636 7864.320 - 7914.732: 84.1127% ( 33) 00:08:00.636 7914.732 - 7965.145: 84.2355% ( 22) 00:08:00.636 7965.145 - 8015.557: 84.3973% ( 29) 00:08:00.636 8015.557 - 8065.969: 84.5257% ( 23) 00:08:00.636 8065.969 - 8116.382: 84.6261% ( 18) 00:08:00.636 8116.382 - 8166.794: 84.7600% ( 24) 00:08:00.636 8166.794 - 8217.206: 84.8605% ( 18) 00:08:00.636 8217.206 - 8267.618: 84.9609% ( 18) 00:08:00.636 8267.618 - 8318.031: 85.0558% ( 17) 00:08:00.636 8318.031 - 8368.443: 85.1618% ( 19) 00:08:00.636 8368.443 - 8418.855: 85.2679% ( 19) 00:08:00.636 8418.855 - 8469.268: 85.4297% ( 29) 00:08:00.636 8469.268 - 8519.680: 85.5636% ( 24) 00:08:00.636 8519.680 - 8570.092: 85.6752% ( 20) 00:08:00.636 8570.092 - 8620.505: 85.8315% ( 28) 00:08:00.636 8620.505 - 8670.917: 85.9710% ( 25) 00:08:00.636 8670.917 - 8721.329: 86.0826% ( 20) 00:08:00.636 8721.329 - 8771.742: 86.2109% ( 23) 00:08:00.636 8771.742 - 8822.154: 86.3728% ( 29) 00:08:00.636 8822.154 - 8872.566: 86.5513% ( 32) 00:08:00.636 8872.566 - 8922.978: 86.7132% ( 29) 00:08:00.636 8922.978 - 8973.391: 86.8638% ( 27) 00:08:00.636 8973.391 - 9023.803: 87.0089% ( 26) 00:08:00.636 9023.803 - 9074.215: 87.1205% ( 20) 00:08:00.636 9074.215 - 9124.628: 87.2433% ( 22) 00:08:00.636 9124.628 - 9175.040: 87.3605% ( 21) 00:08:00.636 9175.040 - 9225.452: 87.5446% ( 33) 00:08:00.636 9225.452 - 9275.865: 87.6674% ( 22) 00:08:00.636 9275.865 - 9326.277: 87.7958% ( 23) 00:08:00.636 9326.277 - 9376.689: 87.9576% ( 29) 00:08:00.636 9376.689 - 9427.102: 88.1975% ( 43) 00:08:00.636 9427.102 - 9477.514: 88.4431% ( 44) 00:08:00.636 9477.514 - 9527.926: 88.6384% ( 35) 00:08:00.636 9527.926 - 9578.338: 88.8225% ( 33) 00:08:00.636 9578.338 - 9628.751: 89.0067% ( 33) 00:08:00.636 9628.751 - 9679.163: 89.1741% ( 30) 00:08:00.636 
9679.163 - 9729.575: 89.3527% ( 32) 00:08:00.636 9729.575 - 9779.988: 89.5368% ( 33) 00:08:00.636 9779.988 - 9830.400: 89.6931% ( 28) 00:08:00.636 9830.400 - 9880.812: 89.8884% ( 35) 00:08:00.636 9880.812 - 9931.225: 90.0558% ( 30) 00:08:00.636 9931.225 - 9981.637: 90.2176% ( 29) 00:08:00.636 9981.637 - 10032.049: 90.3906% ( 31) 00:08:00.636 10032.049 - 10082.462: 90.5580% ( 30) 00:08:00.636 10082.462 - 10132.874: 90.7533% ( 35) 00:08:00.636 10132.874 - 10183.286: 90.9989% ( 44) 00:08:00.636 10183.286 - 10233.698: 91.1607% ( 29) 00:08:00.636 10233.698 - 10284.111: 91.3393% ( 32) 00:08:00.636 10284.111 - 10334.523: 91.5513% ( 38) 00:08:00.636 10334.523 - 10384.935: 91.7411% ( 34) 00:08:00.637 10384.935 - 10435.348: 92.0089% ( 48) 00:08:00.637 10435.348 - 10485.760: 92.2656% ( 46) 00:08:00.637 10485.760 - 10536.172: 92.4665% ( 36) 00:08:00.637 10536.172 - 10586.585: 92.6786% ( 38) 00:08:00.637 10586.585 - 10636.997: 92.8962% ( 39) 00:08:00.637 10636.997 - 10687.409: 93.0859% ( 34) 00:08:00.637 10687.409 - 10737.822: 93.2812% ( 35) 00:08:00.637 10737.822 - 10788.234: 93.5379% ( 46) 00:08:00.637 10788.234 - 10838.646: 93.8170% ( 50) 00:08:00.637 10838.646 - 10889.058: 94.0402% ( 40) 00:08:00.637 10889.058 - 10939.471: 94.2355% ( 35) 00:08:00.637 10939.471 - 10989.883: 94.4587% ( 40) 00:08:00.637 10989.883 - 11040.295: 94.6819% ( 40) 00:08:00.637 11040.295 - 11090.708: 94.9051% ( 40) 00:08:00.637 11090.708 - 11141.120: 95.1172% ( 38) 00:08:00.637 11141.120 - 11191.532: 95.3013% ( 33) 00:08:00.637 11191.532 - 11241.945: 95.4799% ( 32) 00:08:00.637 11241.945 - 11292.357: 95.6362% ( 28) 00:08:00.637 11292.357 - 11342.769: 95.7924% ( 28) 00:08:00.637 11342.769 - 11393.182: 95.9319% ( 25) 00:08:00.637 11393.182 - 11443.594: 96.0658% ( 24) 00:08:00.637 11443.594 - 11494.006: 96.1775% ( 20) 00:08:00.637 11494.006 - 11544.418: 96.2835% ( 19) 00:08:00.637 11544.418 - 11594.831: 96.3783% ( 17) 00:08:00.637 11594.831 - 11645.243: 96.5179% ( 25) 00:08:00.637 11645.243 - 11695.655: 96.6406% ( 22) 00:08:00.637 11695.655 - 11746.068: 96.7746% ( 24) 00:08:00.637 11746.068 - 11796.480: 96.8917% ( 21) 00:08:00.637 11796.480 - 11846.892: 97.0312% ( 25) 00:08:00.637 11846.892 - 11897.305: 97.1931% ( 29) 00:08:00.637 11897.305 - 11947.717: 97.3047% ( 20) 00:08:00.637 11947.717 - 11998.129: 97.4051% ( 18) 00:08:00.637 11998.129 - 12048.542: 97.4944% ( 16) 00:08:00.637 12048.542 - 12098.954: 97.5837% ( 16) 00:08:00.637 12098.954 - 12149.366: 97.6674% ( 15) 00:08:00.637 12149.366 - 12199.778: 97.7623% ( 17) 00:08:00.637 12199.778 - 12250.191: 97.8404% ( 14) 00:08:00.637 12250.191 - 12300.603: 97.9408% ( 18) 00:08:00.637 12300.603 - 12351.015: 98.0301% ( 16) 00:08:00.637 12351.015 - 12401.428: 98.1138% ( 15) 00:08:00.637 12401.428 - 12451.840: 98.2087% ( 17) 00:08:00.637 12451.840 - 12502.252: 98.2868% ( 14) 00:08:00.637 12502.252 - 12552.665: 98.3594% ( 13) 00:08:00.637 12552.665 - 12603.077: 98.4208% ( 11) 00:08:00.637 12603.077 - 12653.489: 98.4933% ( 13) 00:08:00.637 12653.489 - 12703.902: 98.5547% ( 11) 00:08:00.637 12703.902 - 12754.314: 98.6105% ( 10) 00:08:00.637 12754.314 - 12804.726: 98.6440% ( 6) 00:08:00.637 12804.726 - 12855.138: 98.6775% ( 6) 00:08:00.637 12855.138 - 12905.551: 98.7054% ( 5) 00:08:00.637 12905.551 - 13006.375: 98.7667% ( 11) 00:08:00.637 13006.375 - 13107.200: 98.8225% ( 10) 00:08:00.637 13107.200 - 13208.025: 98.8895% ( 12) 00:08:00.637 13208.025 - 13308.849: 98.9286% ( 7) 00:08:00.637 14216.271 - 14317.095: 98.9397% ( 2) 00:08:00.637 14317.095 - 14417.920: 98.9676% ( 5) 00:08:00.637 
14417.920 - 14518.745: 99.0067% ( 7) 00:08:00.637 14518.745 - 14619.569: 99.0681% ( 11) 00:08:00.637 14619.569 - 14720.394: 99.1071% ( 7) 00:08:00.637 14720.394 - 14821.218: 99.1406% ( 6) 00:08:00.637 14821.218 - 14922.043: 99.1853% ( 8) 00:08:00.637 14922.043 - 15022.868: 99.2243% ( 7) 00:08:00.637 15022.868 - 15123.692: 99.2690% ( 8) 00:08:00.637 15123.692 - 15224.517: 99.2857% ( 3) 00:08:00.637 16131.938 - 16232.763: 99.3080% ( 4) 00:08:00.637 16232.763 - 16333.588: 99.3304% ( 4) 00:08:00.637 16333.588 - 16434.412: 99.3527% ( 4) 00:08:00.637 16434.412 - 16535.237: 99.3694% ( 3) 00:08:00.637 16535.237 - 16636.062: 99.3862% ( 3) 00:08:00.637 16636.062 - 16736.886: 99.4085% ( 4) 00:08:00.637 16736.886 - 16837.711: 99.4308% ( 4) 00:08:00.637 16837.711 - 16938.535: 99.4531% ( 4) 00:08:00.637 16938.535 - 17039.360: 99.4699% ( 3) 00:08:00.637 17039.360 - 17140.185: 99.4922% ( 4) 00:08:00.637 17140.185 - 17241.009: 99.5145% ( 4) 00:08:00.637 17241.009 - 17341.834: 99.5368% ( 4) 00:08:00.637 17341.834 - 17442.658: 99.5592% ( 4) 00:08:00.637 17442.658 - 17543.483: 99.5759% ( 3) 00:08:00.637 17543.483 - 17644.308: 99.5982% ( 4) 00:08:00.637 17644.308 - 17745.132: 99.6205% ( 4) 00:08:00.637 17745.132 - 17845.957: 99.6429% ( 4) 00:08:00.637 24298.732 - 24399.557: 99.6484% ( 1) 00:08:00.637 24399.557 - 24500.382: 99.6652% ( 3) 00:08:00.637 24500.382 - 24601.206: 99.6875% ( 4) 00:08:00.637 24601.206 - 24702.031: 99.7098% ( 4) 00:08:00.637 24702.031 - 24802.855: 99.7321% ( 4) 00:08:00.637 24802.855 - 24903.680: 99.7489% ( 3) 00:08:00.637 24903.680 - 25004.505: 99.7712% ( 4) 00:08:00.637 25004.505 - 25105.329: 99.7935% ( 4) 00:08:00.637 25105.329 - 25206.154: 99.8103% ( 3) 00:08:00.637 25206.154 - 25306.978: 99.8326% ( 4) 00:08:00.637 25306.978 - 25407.803: 99.8549% ( 4) 00:08:00.637 25407.803 - 25508.628: 99.8772% ( 4) 00:08:00.637 25508.628 - 25609.452: 99.8940% ( 3) 00:08:00.637 25609.452 - 25710.277: 99.9163% ( 4) 00:08:00.637 25710.277 - 25811.102: 99.9386% ( 4) 00:08:00.637 25811.102 - 26012.751: 99.9833% ( 8) 00:08:00.637 26012.751 - 26214.400: 100.0000% ( 3) 00:08:00.637 00:08:00.637 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:00.637 ============================================================================== 00:08:00.637 Range in us Cumulative IO count 00:08:00.637 3503.655 - 3528.862: 0.0112% ( 2) 00:08:00.637 3528.862 - 3554.068: 0.0391% ( 5) 00:08:00.637 3554.068 - 3579.274: 0.0670% ( 5) 00:08:00.637 3579.274 - 3604.480: 0.0781% ( 2) 00:08:00.637 3604.480 - 3629.686: 0.0837% ( 1) 00:08:00.637 3629.686 - 3654.892: 0.0893% ( 1) 00:08:00.637 3654.892 - 3680.098: 0.1004% ( 2) 00:08:00.637 3680.098 - 3705.305: 0.1172% ( 3) 00:08:00.637 3705.305 - 3730.511: 0.1339% ( 3) 00:08:00.637 3730.511 - 3755.717: 0.1451% ( 2) 00:08:00.637 3755.717 - 3780.923: 0.1786% ( 6) 00:08:00.637 3780.923 - 3806.129: 0.1897% ( 2) 00:08:00.637 3806.129 - 3831.335: 0.2065% ( 3) 00:08:00.637 3831.335 - 3856.542: 0.2176% ( 2) 00:08:00.637 3856.542 - 3881.748: 0.2288% ( 2) 00:08:00.637 3881.748 - 3906.954: 0.2455% ( 3) 00:08:00.637 3906.954 - 3932.160: 0.2567% ( 2) 00:08:00.637 3932.160 - 3957.366: 0.2679% ( 2) 00:08:00.637 3957.366 - 3982.572: 0.2790% ( 2) 00:08:00.637 3982.572 - 4007.778: 0.2902% ( 2) 00:08:00.637 4007.778 - 4032.985: 0.3069% ( 3) 00:08:00.637 4032.985 - 4058.191: 0.3181% ( 2) 00:08:00.637 4058.191 - 4083.397: 0.3292% ( 2) 00:08:00.637 4083.397 - 4108.603: 0.3404% ( 2) 00:08:00.637 4108.603 - 4133.809: 0.3571% ( 3) 00:08:00.637 4940.406 - 4965.612: 0.4018% ( 8) 00:08:00.637 
4965.612 - 4990.818: 0.4129% ( 2) 00:08:00.637 4990.818 - 5016.025: 0.4185% ( 1) 00:08:00.637 5016.025 - 5041.231: 0.4297% ( 2) 00:08:00.637 5041.231 - 5066.437: 0.4408% ( 2) 00:08:00.637 5066.437 - 5091.643: 0.4576% ( 3) 00:08:00.637 5091.643 - 5116.849: 0.4688% ( 2) 00:08:00.637 5116.849 - 5142.055: 0.4855% ( 3) 00:08:00.637 5142.055 - 5167.262: 0.4967% ( 2) 00:08:00.637 5167.262 - 5192.468: 0.5134% ( 3) 00:08:00.637 5192.468 - 5217.674: 0.5246% ( 2) 00:08:00.637 5217.674 - 5242.880: 0.5413% ( 3) 00:08:00.637 5242.880 - 5268.086: 0.5525% ( 2) 00:08:00.637 5268.086 - 5293.292: 0.5636% ( 2) 00:08:00.637 5293.292 - 5318.498: 0.5748% ( 2) 00:08:00.637 5318.498 - 5343.705: 0.5859% ( 2) 00:08:00.637 5343.705 - 5368.911: 0.5971% ( 2) 00:08:00.637 5368.911 - 5394.117: 0.6138% ( 3) 00:08:00.637 5394.117 - 5419.323: 0.6250% ( 2) 00:08:00.637 5419.323 - 5444.529: 0.6417% ( 3) 00:08:00.637 5444.529 - 5469.735: 0.6529% ( 2) 00:08:00.637 5469.735 - 5494.942: 0.6696% ( 3) 00:08:00.637 5494.942 - 5520.148: 0.6808% ( 2) 00:08:00.637 5520.148 - 5545.354: 0.6975% ( 3) 00:08:00.637 5545.354 - 5570.560: 0.7087% ( 2) 00:08:00.637 5570.560 - 5595.766: 0.7143% ( 1) 00:08:00.637 5721.797 - 5747.003: 0.7254% ( 2) 00:08:00.637 5747.003 - 5772.209: 0.7980% ( 13) 00:08:00.637 5772.209 - 5797.415: 0.8929% ( 17) 00:08:00.637 5797.415 - 5822.622: 0.9766% ( 15) 00:08:00.637 5822.622 - 5847.828: 1.1551% ( 32) 00:08:00.637 5847.828 - 5873.034: 1.5569% ( 72) 00:08:00.637 5873.034 - 5898.240: 2.3438% ( 141) 00:08:00.637 5898.240 - 5923.446: 3.4096% ( 191) 00:08:00.637 5923.446 - 5948.652: 4.9107% ( 269) 00:08:00.637 5948.652 - 5973.858: 6.5960% ( 302) 00:08:00.637 5973.858 - 5999.065: 8.3482% ( 314) 00:08:00.637 5999.065 - 6024.271: 10.1953% ( 331) 00:08:00.637 6024.271 - 6049.477: 12.0592% ( 334) 00:08:00.637 6049.477 - 6074.683: 14.0737% ( 361) 00:08:00.637 6074.683 - 6099.889: 16.3002% ( 399) 00:08:00.637 6099.889 - 6125.095: 18.4542% ( 386) 00:08:00.637 6125.095 - 6150.302: 20.6417% ( 392) 00:08:00.637 6150.302 - 6175.508: 22.7567% ( 379) 00:08:00.637 6175.508 - 6200.714: 24.8493% ( 375) 00:08:00.637 6200.714 - 6225.920: 26.9810% ( 382) 00:08:00.637 6225.920 - 6251.126: 29.1629% ( 391) 00:08:00.637 6251.126 - 6276.332: 31.2612% ( 376) 00:08:00.637 6276.332 - 6301.538: 33.3873% ( 381) 00:08:00.637 6301.538 - 6326.745: 35.5469% ( 387) 00:08:00.637 6326.745 - 6351.951: 37.7232% ( 390) 00:08:00.637 6351.951 - 6377.157: 39.9386% ( 397) 00:08:00.637 6377.157 - 6402.363: 42.1987% ( 405) 00:08:00.637 6402.363 - 6427.569: 44.3973% ( 394) 00:08:00.637 6427.569 - 6452.775: 46.6518% ( 404) 00:08:00.637 6452.775 - 6503.188: 51.2221% ( 819) 00:08:00.637 6503.188 - 6553.600: 55.8203% ( 824) 00:08:00.637 6553.600 - 6604.012: 60.3627% ( 814) 00:08:00.637 6604.012 - 6654.425: 64.8605% ( 806) 00:08:00.637 6654.425 - 6704.837: 68.8281% ( 711) 00:08:00.637 6704.837 - 6755.249: 71.9754% ( 564) 00:08:00.637 6755.249 - 6805.662: 74.2746% ( 412) 00:08:00.638 6805.662 - 6856.074: 75.8594% ( 284) 00:08:00.638 6856.074 - 6906.486: 77.0592% ( 215) 00:08:00.638 6906.486 - 6956.898: 77.9632% ( 162) 00:08:00.638 6956.898 - 7007.311: 78.5547% ( 106) 00:08:00.638 7007.311 - 7057.723: 79.0904% ( 96) 00:08:00.638 7057.723 - 7108.135: 79.5201% ( 77) 00:08:00.638 7108.135 - 7158.548: 79.9665% ( 80) 00:08:00.638 7158.548 - 7208.960: 80.4520% ( 87) 00:08:00.638 7208.960 - 7259.372: 80.9040% ( 81) 00:08:00.638 7259.372 - 7309.785: 81.3170% ( 74) 00:08:00.638 7309.785 - 7360.197: 81.6797% ( 65) 00:08:00.638 7360.197 - 7410.609: 81.9699% ( 52) 00:08:00.638 
[... latency histogram buckets for the preceding namespace continue: 7410.609 - 26012.751 us, cumulative 82.2266% -> 100.0000% ...]

00:08:00.638 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:00.638 ==============================================================================
00:08:00.638        Range in us     Cumulative    IO count
[... histogram buckets: 3251.594 - 25206.154 us, cumulative 0.0112% -> 100.0000% ...]

00:08:00.640 00:53:23 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
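An annotated copy of that invocation follows. The flag meanings are taken from spdk_nvme_perf's usual option set rather than from anything printed in this log, so treat them as a hedged reading and verify against spdk_nvme_perf --help on the matching SPDK build:

    # -q 128    queue depth per namespace
    # -w write  workload type: sequential writes
    # -o 12288  I/O size in bytes (12 KiB)
    # -t 1      run time in seconds
    # -LL       latency tracking; the doubled -L appears to enable the
    #           detailed per-bucket histograms printed further below
    # -i 0      shared-memory group ID, letting the app coexist with other SPDK processes
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0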
00:08:01.576 Initializing NVMe Controllers
00:08:01.576 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:01.576 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:01.576 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:01.576 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:01.576 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:01.576 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:01.576 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:01.576 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:01.576 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:01.576 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:01.576 Initialization complete. Launching workers.
00:08:01.576 ========================================================
00:08:01.576                                                                    Latency(us)
00:08:01.576 Device Information                       :       IOPS      MiB/s    Average        min        max
00:08:01.576 PCIE (0000:00:10.0) NSID 1 from core  0:   17855.45     209.24    7169.96    5038.44   23066.20
00:08:01.576 PCIE (0000:00:11.0) NSID 1 from core  0:   17855.45     209.24    7164.04    4674.38   21765.26
00:08:01.576 PCIE (0000:00:13.0) NSID 1 from core  0:   17855.45     209.24    7158.48    4012.69   22061.58
00:08:01.576 PCIE (0000:00:12.0) NSID 1 from core  0:   17855.45     209.24    7152.60    4000.48   21553.24
00:08:01.576 PCIE (0000:00:12.0) NSID 2 from core  0:   17855.45     209.24    7146.64    3774.07   20834.22
00:08:01.576 PCIE (0000:00:12.0) NSID 3 from core  0:   17855.45     209.24    7140.85    3517.11   20093.69
00:08:01.576 ========================================================
00:08:01.576 Total                                    :  107132.68    1255.46    7155.43    3517.11   23066.20

Summary latency data from core 0, all six namespaces (us):
=================================================================================
Percentile     10.0/ns1    11.0/ns1    13.0/ns1    12.0/ns1    12.0/ns2    12.0/ns3
  1.00000%     6049.477    6125.095    6125.095    6150.302    6099.889    6099.889
 10.00000%     6402.363    6503.188    6503.188    6503.188    6503.188    6503.188
 25.00000%     6604.012    6654.425    6654.425    6654.425    6654.425    6654.425
 50.00000%     6906.486    6856.074    6856.074    6856.074    6856.074    6856.074
 75.00000%     7259.372    7158.548    7158.548    7158.548    7158.548    7158.548
 90.00000%     7914.732    7864.320    7813.908    7763.495    7813.908    7813.908
 95.00000%     8670.917    8922.978    9074.215    8973.391    8771.742    8570.092
 98.00000%    11393.182   11594.831   11746.068   11846.892   11544.418   11292.357
 99.00000%    15123.692   15728.640   16131.938   16232.763   15829.465   15627.815
 99.50000%    16736.886   17140.185   17140.185   16636.062   16535.237   16333.588
 99.90000%    22584.714   21374.818   21273.994   21173.169   20568.222   19862.449
 99.99000%    23088.837   21778.117   21576.468   21576.468   20870.695   20164.923
 99.99900%    23088.837   21778.117   22080.591   21576.468   20870.695   20164.923
 99.99990%    23088.837   21778.117   22080.591   21576.468   20870.695   20164.923
 99.99999%    23088.837   21778.117   22080.591   21576.468   20870.695   20164.923
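As a consistency check on the throughput column: with the 12288-byte I/O size from the command line, 17855.45 IOPS x 12288 = 219,407,770 bytes/s ≈ 209.24 MiB/s per namespace, and 107132.68 IOPS x 12288 ≈ 1255.46 MiB/s for the Total row, both matching the reported MiB/s values.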
00:08:01.577 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:01.577 ==============================================================================
00:08:01.577        Range in us     Cumulative    IO count
[... histogram buckets: 5016.025 - 23088.837 us, cumulative 0.0280% -> 100.0000% ...]

00:08:01.579 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:01.579 ==============================================================================
00:08:01.579        Range in us     Cumulative    IO count
[... histogram buckets: 4663.138 - 21778.117 us, cumulative 0.0056% -> 100.0000% ...]
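Reading the histogram rows: each entry has the shape "low - high: cumulative% ( count )", where count is the number of I/Os that completed in that latency bucket and the percentage is the running cumulative share. The first rows of the PCIE (0000:00:10.0) NSID 1 histogram, for example, are "5016.025 - 5041.231: 0.0280% ( 5)" and "5041.231 - 5066.437: 0.1120% ( 15)": this one-second run completed roughly 17855 I/Os, and 5/17855 ≈ 0.0280% while (5+15)/17855 ≈ 0.1120%, confirming that the counts are per bucket while the percentages accumulate.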
00:08:01.580 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:01.580 ==============================================================================
00:08:01.580        Range in us     Cumulative    IO count
[... histogram buckets: 4007.778 - 22080.591 us, cumulative 0.0056% -> 100.0000% ...]
00:08:01.581 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:01.581 ==============================================================================
00:08:01.581        Range in us     Cumulative    IO count
[... histogram buckets: 3982.572 - 21576.468 us, cumulative 0.0112% -> 100.0000% ...]
00:08:01.583 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:01.583 ==============================================================================
00:08:01.583        Range in us     Cumulative    IO count
[... histogram buckets: 3755.717 - 11393.182 us, cumulative 0.0056% -> 97.9503%; the captured log ends mid-histogram here ...]
97.9727% ( 4) 00:08:01.584 11443.594 - 11494.006: 97.9895% ( 3) 00:08:01.584 11494.006 - 11544.418: 98.0119% ( 4) 00:08:01.584 11544.418 - 11594.831: 98.0343% ( 4) 00:08:01.584 11594.831 - 11645.243: 98.0567% ( 4) 00:08:01.584 11645.243 - 11695.655: 98.0791% ( 4) 00:08:01.584 11695.655 - 11746.068: 98.0959% ( 3) 00:08:01.584 11746.068 - 11796.480: 98.1127% ( 3) 00:08:01.584 11796.480 - 11846.892: 98.1351% ( 4) 00:08:01.584 11846.892 - 11897.305: 98.1519% ( 3) 00:08:01.584 11897.305 - 11947.717: 98.1743% ( 4) 00:08:01.584 11947.717 - 11998.129: 98.2023% ( 5) 00:08:01.584 11998.129 - 12048.542: 98.2415% ( 7) 00:08:01.584 12048.542 - 12098.954: 98.3087% ( 12) 00:08:01.584 12098.954 - 12149.366: 98.4767% ( 30) 00:08:01.584 12149.366 - 12199.778: 98.5103% ( 6) 00:08:01.584 12199.778 - 12250.191: 98.5327% ( 4) 00:08:01.584 12250.191 - 12300.603: 98.5495% ( 3) 00:08:01.584 12300.603 - 12351.015: 98.5663% ( 3) 00:08:01.584 12351.015 - 12401.428: 98.5719% ( 1) 00:08:01.584 12603.077 - 12653.489: 98.5775% ( 1) 00:08:01.584 12653.489 - 12703.902: 98.5887% ( 2) 00:08:01.584 12703.902 - 12754.314: 98.5999% ( 2) 00:08:01.584 12754.314 - 12804.726: 98.6167% ( 3) 00:08:01.584 12804.726 - 12855.138: 98.6223% ( 1) 00:08:01.584 12855.138 - 12905.551: 98.6391% ( 3) 00:08:01.584 12905.551 - 13006.375: 98.6671% ( 5) 00:08:01.584 13006.375 - 13107.200: 98.6951% ( 5) 00:08:01.584 13107.200 - 13208.025: 98.7175% ( 4) 00:08:01.584 13208.025 - 13308.849: 98.7679% ( 9) 00:08:01.584 13308.849 - 13409.674: 98.8071% ( 7) 00:08:01.584 13409.674 - 13510.498: 98.8687% ( 11) 00:08:01.584 13510.498 - 13611.323: 98.8967% ( 5) 00:08:01.584 13611.323 - 13712.148: 98.9191% ( 4) 00:08:01.584 13712.148 - 13812.972: 98.9247% ( 1) 00:08:01.584 15627.815 - 15728.640: 98.9639% ( 7) 00:08:01.584 15728.640 - 15829.465: 99.0143% ( 9) 00:08:01.584 15829.465 - 15930.289: 99.0647% ( 9) 00:08:01.584 15930.289 - 16031.114: 99.1095% ( 8) 00:08:01.584 16031.114 - 16131.938: 99.1431% ( 6) 00:08:01.584 16131.938 - 16232.763: 99.1711% ( 5) 00:08:01.584 16232.763 - 16333.588: 99.1991% ( 5) 00:08:01.584 16333.588 - 16434.412: 99.2832% ( 15) 00:08:01.584 16434.412 - 16535.237: 99.5240% ( 43) 00:08:01.584 16535.237 - 16636.062: 99.5856% ( 11) 00:08:01.584 16636.062 - 16736.886: 99.6360% ( 9) 00:08:01.584 16736.886 - 16837.711: 99.6416% ( 1) 00:08:01.584 19862.449 - 19963.274: 99.6472% ( 1) 00:08:01.584 19963.274 - 20064.098: 99.7032% ( 10) 00:08:01.584 20064.098 - 20164.923: 99.8376% ( 24) 00:08:01.584 20164.923 - 20265.748: 99.8768% ( 7) 00:08:01.584 20265.748 - 20366.572: 99.8824% ( 1) 00:08:01.584 20366.572 - 20467.397: 99.8936% ( 2) 00:08:01.584 20467.397 - 20568.222: 99.9104% ( 3) 00:08:01.584 20568.222 - 20669.046: 99.9328% ( 4) 00:08:01.584 20669.046 - 20769.871: 99.9720% ( 7) 00:08:01.584 20769.871 - 20870.695: 100.0000% ( 5) 00:08:01.584 00:08:01.584 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:01.584 ============================================================================== 00:08:01.584 Range in us Cumulative IO count 00:08:01.584 3503.655 - 3528.862: 0.0392% ( 7) 00:08:01.584 3528.862 - 3554.068: 0.1176% ( 14) 00:08:01.584 3554.068 - 3579.274: 0.1960% ( 14) 00:08:01.584 3579.274 - 3604.480: 0.2688% ( 13) 00:08:01.584 3604.480 - 3629.686: 0.2912% ( 4) 00:08:01.584 3629.686 - 3654.892: 0.3024% ( 2) 00:08:01.584 3654.892 - 3680.098: 0.3136% ( 2) 00:08:01.584 3680.098 - 3705.305: 0.3248% ( 2) 00:08:01.584 3705.305 - 3730.511: 0.3360% ( 2) 00:08:01.584 3730.511 - 3755.717: 0.3528% ( 3) 00:08:01.584 3755.717 - 3780.923: 
0.3584% ( 1) 00:08:01.584 5066.437 - 5091.643: 0.3640% ( 1) 00:08:01.584 5268.086 - 5293.292: 0.3752% ( 2) 00:08:01.584 5293.292 - 5318.498: 0.4200% ( 8) 00:08:01.584 5318.498 - 5343.705: 0.4872% ( 12) 00:08:01.584 5343.705 - 5368.911: 0.4984% ( 2) 00:08:01.584 5368.911 - 5394.117: 0.5096% ( 2) 00:08:01.584 5394.117 - 5419.323: 0.5152% ( 1) 00:08:01.584 5419.323 - 5444.529: 0.5264% ( 2) 00:08:01.584 5444.529 - 5469.735: 0.5376% ( 2) 00:08:01.584 5469.735 - 5494.942: 0.5768% ( 7) 00:08:01.584 5494.942 - 5520.148: 0.6048% ( 5) 00:08:01.584 5520.148 - 5545.354: 0.6216% ( 3) 00:08:01.584 5545.354 - 5570.560: 0.6272% ( 1) 00:08:01.584 5570.560 - 5595.766: 0.6328% ( 1) 00:08:01.584 5595.766 - 5620.972: 0.6440% ( 2) 00:08:01.584 5620.972 - 5646.178: 0.6496% ( 1) 00:08:01.585 5646.178 - 5671.385: 0.6608% ( 2) 00:08:01.585 5671.385 - 5696.591: 0.6664% ( 1) 00:08:01.585 5696.591 - 5721.797: 0.6720% ( 1) 00:08:01.585 5721.797 - 5747.003: 0.6832% ( 2) 00:08:01.585 5747.003 - 5772.209: 0.6888% ( 1) 00:08:01.585 5772.209 - 5797.415: 0.7000% ( 2) 00:08:01.585 5797.415 - 5822.622: 0.7056% ( 1) 00:08:01.585 5822.622 - 5847.828: 0.7168% ( 2) 00:08:01.585 5873.034 - 5898.240: 0.7336% ( 3) 00:08:01.585 5898.240 - 5923.446: 0.7504% ( 3) 00:08:01.585 5923.446 - 5948.652: 0.7672% ( 3) 00:08:01.585 5948.652 - 5973.858: 0.7784% ( 2) 00:08:01.585 5973.858 - 5999.065: 0.7953% ( 3) 00:08:01.585 5999.065 - 6024.271: 0.8233% ( 5) 00:08:01.585 6024.271 - 6049.477: 0.8793% ( 10) 00:08:01.585 6049.477 - 6074.683: 0.9689% ( 16) 00:08:01.585 6074.683 - 6099.889: 1.1033% ( 24) 00:08:01.585 6099.889 - 6125.095: 1.3217% ( 39) 00:08:01.585 6125.095 - 6150.302: 1.4729% ( 27) 00:08:01.585 6150.302 - 6175.508: 1.6577% ( 33) 00:08:01.585 6175.508 - 6200.714: 2.0161% ( 64) 00:08:01.585 6200.714 - 6225.920: 2.4082% ( 70) 00:08:01.585 6225.920 - 6251.126: 2.8058% ( 71) 00:08:01.585 6251.126 - 6276.332: 3.1754% ( 66) 00:08:01.585 6276.332 - 6301.538: 3.5226% ( 62) 00:08:01.585 6301.538 - 6326.745: 4.1835% ( 118) 00:08:01.585 6326.745 - 6351.951: 4.9843% ( 143) 00:08:01.585 6351.951 - 6377.157: 5.9308% ( 169) 00:08:01.585 6377.157 - 6402.363: 6.9556% ( 183) 00:08:01.585 6402.363 - 6427.569: 7.8629% ( 162) 00:08:01.585 6427.569 - 6452.775: 9.3302% ( 262) 00:08:01.585 6452.775 - 6503.188: 12.8584% ( 630) 00:08:01.585 6503.188 - 6553.600: 16.9131% ( 724) 00:08:01.585 6553.600 - 6604.012: 21.6118% ( 839) 00:08:01.585 6604.012 - 6654.425: 26.6577% ( 901) 00:08:01.585 6654.425 - 6704.837: 32.8741% ( 1110) 00:08:01.585 6704.837 - 6755.249: 38.8497% ( 1067) 00:08:01.585 6755.249 - 6805.662: 44.6965% ( 1044) 00:08:01.585 6805.662 - 6856.074: 50.8457% ( 1098) 00:08:01.585 6856.074 - 6906.486: 56.2836% ( 971) 00:08:01.585 6906.486 - 6956.898: 61.1839% ( 875) 00:08:01.585 6956.898 - 7007.311: 64.7513% ( 637) 00:08:01.585 7007.311 - 7057.723: 68.6772% ( 701) 00:08:01.585 7057.723 - 7108.135: 72.2110% ( 631) 00:08:01.585 7108.135 - 7158.548: 75.3864% ( 567) 00:08:01.585 7158.548 - 7208.960: 78.1026% ( 485) 00:08:01.585 7208.960 - 7259.372: 80.6228% ( 450) 00:08:01.585 7259.372 - 7309.785: 82.2917% ( 298) 00:08:01.585 7309.785 - 7360.197: 83.5181% ( 219) 00:08:01.585 7360.197 - 7410.609: 84.7446% ( 219) 00:08:01.585 7410.609 - 7461.022: 85.5903% ( 151) 00:08:01.585 7461.022 - 7511.434: 86.5031% ( 163) 00:08:01.585 7511.434 - 7561.846: 87.1864% ( 122) 00:08:01.585 7561.846 - 7612.258: 87.8696% ( 122) 00:08:01.585 7612.258 - 7662.671: 88.3289% ( 82) 00:08:01.585 7662.671 - 7713.083: 89.0457% ( 128) 00:08:01.585 7713.083 - 7763.495: 89.6281% ( 104) 
00:08:01.585 7763.495 - 7813.908: 90.2330% ( 108) 00:08:01.585 7813.908 - 7864.320: 90.6754% ( 79) 00:08:01.585 7864.320 - 7914.732: 91.1626% ( 87) 00:08:01.585 7914.732 - 7965.145: 91.7115% ( 98) 00:08:01.585 7965.145 - 8015.557: 92.4731% ( 136) 00:08:01.585 8015.557 - 8065.969: 92.9267% ( 81) 00:08:01.585 8065.969 - 8116.382: 93.4140% ( 87) 00:08:01.585 8116.382 - 8166.794: 93.7388% ( 58) 00:08:01.585 8166.794 - 8217.206: 94.1028% ( 65) 00:08:01.585 8217.206 - 8267.618: 94.4556% ( 63) 00:08:01.585 8267.618 - 8318.031: 94.6965% ( 43) 00:08:01.585 8318.031 - 8368.443: 94.7805% ( 15) 00:08:01.585 8368.443 - 8418.855: 94.8533% ( 13) 00:08:01.585 8418.855 - 8469.268: 94.9093% ( 10) 00:08:01.585 8469.268 - 8519.680: 94.9429% ( 6) 00:08:01.585 8519.680 - 8570.092: 95.0045% ( 11) 00:08:01.585 8570.092 - 8620.505: 95.1165% ( 20) 00:08:01.585 8620.505 - 8670.917: 95.2845% ( 30) 00:08:01.585 8670.917 - 8721.329: 95.4469% ( 29) 00:08:01.585 8721.329 - 8771.742: 95.5477% ( 18) 00:08:01.585 8771.742 - 8822.154: 95.6037% ( 10) 00:08:01.585 8822.154 - 8872.566: 95.6485% ( 8) 00:08:01.585 8872.566 - 8922.978: 95.7101% ( 11) 00:08:01.585 8922.978 - 8973.391: 95.9117% ( 36) 00:08:01.585 8973.391 - 9023.803: 95.9789% ( 12) 00:08:01.585 9023.803 - 9074.215: 96.0461% ( 12) 00:08:01.585 9074.215 - 9124.628: 96.0853% ( 7) 00:08:01.585 9124.628 - 9175.040: 96.1246% ( 7) 00:08:01.585 9175.040 - 9225.452: 96.1638% ( 7) 00:08:01.585 9225.452 - 9275.865: 96.2198% ( 10) 00:08:01.585 9275.865 - 9326.277: 96.2702% ( 9) 00:08:01.585 9326.277 - 9376.689: 96.2870% ( 3) 00:08:01.585 9376.689 - 9427.102: 96.2982% ( 2) 00:08:01.585 9427.102 - 9477.514: 96.3150% ( 3) 00:08:01.585 9477.514 - 9527.926: 96.3430% ( 5) 00:08:01.585 9527.926 - 9578.338: 96.3710% ( 5) 00:08:01.585 9578.338 - 9628.751: 96.3990% ( 5) 00:08:01.585 9628.751 - 9679.163: 96.4158% ( 3) 00:08:01.585 9679.163 - 9729.575: 96.4438% ( 5) 00:08:01.585 9729.575 - 9779.988: 96.4830% ( 7) 00:08:01.585 9779.988 - 9830.400: 96.5390% ( 10) 00:08:01.585 9830.400 - 9880.812: 96.6174% ( 14) 00:08:01.585 9880.812 - 9931.225: 96.6902% ( 13) 00:08:01.585 9931.225 - 9981.637: 96.7462% ( 10) 00:08:01.585 9981.637 - 10032.049: 96.7742% ( 5) 00:08:01.585 10032.049 - 10082.462: 96.8022% ( 5) 00:08:01.585 10082.462 - 10132.874: 96.8190% ( 3) 00:08:01.585 10132.874 - 10183.286: 96.8414% ( 4) 00:08:01.585 10183.286 - 10233.698: 96.8582% ( 3) 00:08:01.585 10233.698 - 10284.111: 96.9086% ( 9) 00:08:01.585 10284.111 - 10334.523: 96.9646% ( 10) 00:08:01.585 10334.523 - 10384.935: 97.0206% ( 10) 00:08:01.585 10384.935 - 10435.348: 97.0598% ( 7) 00:08:01.585 10435.348 - 10485.760: 97.0710% ( 2) 00:08:01.585 10485.760 - 10536.172: 97.0766% ( 1) 00:08:01.585 10536.172 - 10586.585: 97.0878% ( 2) 00:08:01.585 10586.585 - 10636.997: 97.0934% ( 1) 00:08:01.585 10636.997 - 10687.409: 97.1102% ( 3) 00:08:01.585 10687.409 - 10737.822: 97.1438% ( 6) 00:08:01.585 10737.822 - 10788.234: 97.1886% ( 8) 00:08:01.585 10788.234 - 10838.646: 97.2446% ( 10) 00:08:01.585 10838.646 - 10889.058: 97.3006% ( 10) 00:08:01.585 10889.058 - 10939.471: 97.3734% ( 13) 00:08:01.585 10939.471 - 10989.883: 97.5358% ( 29) 00:08:01.585 10989.883 - 11040.295: 97.6086% ( 13) 00:08:01.585 11040.295 - 11090.708: 97.6647% ( 10) 00:08:01.585 11090.708 - 11141.120: 97.7095% ( 8) 00:08:01.585 11141.120 - 11191.532: 97.7655% ( 10) 00:08:01.585 11191.532 - 11241.945: 97.8159% ( 9) 00:08:01.585 11241.945 - 11292.357: 98.0175% ( 36) 00:08:01.585 11292.357 - 11342.769: 98.0791% ( 11) 00:08:01.585 11342.769 - 11393.182: 98.1071% ( 5) 
00:08:01.585 11393.182 - 11443.594: 98.1463% ( 7) 00:08:01.585 11443.594 - 11494.006: 98.1743% ( 5) 00:08:01.585 11494.006 - 11544.418: 98.1911% ( 3) 00:08:01.585 11544.418 - 11594.831: 98.2079% ( 3) 00:08:01.585 11594.831 - 11645.243: 98.2135% ( 1) 00:08:01.585 11796.480 - 11846.892: 98.2191% ( 1) 00:08:01.585 11846.892 - 11897.305: 98.2247% ( 1) 00:08:01.585 11897.305 - 11947.717: 98.2471% ( 4) 00:08:01.585 11947.717 - 11998.129: 98.2863% ( 7) 00:08:01.585 11998.129 - 12048.542: 98.3087% ( 4) 00:08:01.585 12048.542 - 12098.954: 98.5103% ( 36) 00:08:01.585 12098.954 - 12149.366: 98.5271% ( 3) 00:08:01.585 12149.366 - 12199.778: 98.5383% ( 2) 00:08:01.586 12199.778 - 12250.191: 98.5551% ( 3) 00:08:01.586 12250.191 - 12300.603: 98.5663% ( 2) 00:08:01.586 13409.674 - 13510.498: 98.5831% ( 3) 00:08:01.586 13510.498 - 13611.323: 98.6055% ( 4) 00:08:01.586 13611.323 - 13712.148: 98.6335% ( 5) 00:08:01.586 13712.148 - 13812.972: 98.6559% ( 4) 00:08:01.586 13812.972 - 13913.797: 98.6839% ( 5) 00:08:01.586 13913.797 - 14014.622: 98.7063% ( 4) 00:08:01.586 14014.622 - 14115.446: 98.7567% ( 9) 00:08:01.586 14115.446 - 14216.271: 98.8127% ( 10) 00:08:01.586 14216.271 - 14317.095: 98.8743% ( 11) 00:08:01.586 14317.095 - 14417.920: 98.9135% ( 7) 00:08:01.586 14417.920 - 14518.745: 98.9247% ( 2) 00:08:01.586 15123.692 - 15224.517: 98.9303% ( 1) 00:08:01.586 15325.342 - 15426.166: 98.9695% ( 7) 00:08:01.586 15426.166 - 15526.991: 98.9975% ( 5) 00:08:01.586 15526.991 - 15627.815: 99.0647% ( 12) 00:08:01.586 15627.815 - 15728.640: 99.1543% ( 16) 00:08:01.586 15728.640 - 15829.465: 99.2608% ( 19) 00:08:01.586 15829.465 - 15930.289: 99.3560% ( 17) 00:08:01.586 15930.289 - 16031.114: 99.4008% ( 8) 00:08:01.586 16031.114 - 16131.938: 99.4400% ( 7) 00:08:01.586 16131.938 - 16232.763: 99.4904% ( 9) 00:08:01.586 16232.763 - 16333.588: 99.5352% ( 8) 00:08:01.586 16333.588 - 16434.412: 99.5800% ( 8) 00:08:01.586 16434.412 - 16535.237: 99.6248% ( 8) 00:08:01.586 16535.237 - 16636.062: 99.6416% ( 3) 00:08:01.586 19257.502 - 19358.326: 99.6472% ( 1) 00:08:01.586 19358.326 - 19459.151: 99.6752% ( 5) 00:08:01.586 19459.151 - 19559.975: 99.7088% ( 6) 00:08:01.586 19559.975 - 19660.800: 99.7424% ( 6) 00:08:01.586 19660.800 - 19761.625: 99.8880% ( 26) 00:08:01.586 19761.625 - 19862.449: 99.9216% ( 6) 00:08:01.586 19862.449 - 19963.274: 99.9552% ( 6) 00:08:01.586 19963.274 - 20064.098: 99.9888% ( 6) 00:08:01.586 20064.098 - 20164.923: 100.0000% ( 2) 00:08:01.586 00:08:01.586 00:53:24 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:01.586 00:08:01.586 real 0m2.460s 00:08:01.586 user 0m2.178s 00:08:01.586 sys 0m0.176s 00:08:01.586 ************************************ 00:08:01.586 END TEST nvme_perf 00:08:01.586 ************************************ 00:08:01.586 00:53:24 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.586 00:53:24 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:01.586 00:53:24 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:01.586 00:53:24 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:01.586 00:53:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.586 00:53:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.586 ************************************ 00:08:01.586 START TEST nvme_hello_world 00:08:01.586 ************************************ 00:08:01.586 00:53:24 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:01.845 Initializing NVMe Controllers 00:08:01.845 Attached to 0000:00:10.0 00:08:01.845 Namespace ID: 1 size: 6GB 00:08:01.845 Attached to 0000:00:11.0 00:08:01.845 Namespace ID: 1 size: 5GB 00:08:01.845 Attached to 0000:00:13.0 00:08:01.845 Namespace ID: 1 size: 1GB 00:08:01.845 Attached to 0000:00:12.0 00:08:01.845 Namespace ID: 1 size: 4GB 00:08:01.845 Namespace ID: 2 size: 4GB 00:08:01.845 Namespace ID: 3 size: 4GB 00:08:01.845 Initialization complete. 00:08:01.845 INFO: using host memory buffer for IO 00:08:01.845 Hello world! 00:08:01.845 INFO: using host memory buffer for IO 00:08:01.845 Hello world! 00:08:01.845 INFO: using host memory buffer for IO 00:08:01.845 Hello world! 00:08:01.845 INFO: using host memory buffer for IO 00:08:01.845 Hello world! 00:08:01.845 INFO: using host memory buffer for IO 00:08:01.845 Hello world! 00:08:01.845 INFO: using host memory buffer for IO 00:08:01.845 Hello world! 00:08:01.845 00:08:01.845 real 0m0.198s 00:08:01.845 user 0m0.074s 00:08:01.845 sys 0m0.083s 00:08:01.845 00:53:24 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.845 00:53:24 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:01.845 ************************************ 00:08:01.845 END TEST nvme_hello_world 00:08:01.845 ************************************ 00:08:01.845 00:53:24 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:01.845 00:53:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.845 00:53:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.845 00:53:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:01.845 ************************************ 00:08:01.845 START TEST nvme_sgl 00:08:01.845 ************************************ 00:08:01.845 00:53:24 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:02.103 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:02.103 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:02.103 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:02.103 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:02.103 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:02.103 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:02.103 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:02.103 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:02.103 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:02.103 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:02.103 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:02.103 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_8 Invalid IO 
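The hello_world run above probes the four emulated controllers and reports each active namespace. A minimal sketch of that probe/attach flow, assuming only the public SPDK NVMe driver API (this is not the example's actual source; error handling and I/O are omitted):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Return true so the driver attaches every controller it discovers. */
static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
        return true;
}

/* Runs once per attached controller; walk its active namespaces. */
static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
        uint32_t nsid;

        printf("Attached to %s\n", trid->traddr);
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
                struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
                printf("  Namespace ID: %u size: %" PRIu64 "GB\n",
                       nsid, spdk_nvme_ns_get_size(ns) / 1000000000);
        }
}

int
main(void)
{
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "hello_world_sketch";        /* illustrative name */
        if (spdk_env_init(&opts) < 0) {
                return 1;
        }
        /* A NULL transport ID scans the local PCIe bus, as in the log above. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}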
length parameter 00:08:02.103 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:02.103 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:02.103 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:02.103 NVMe Readv/Writev Request test 00:08:02.103 Attached to 0000:00:10.0 00:08:02.103 Attached to 0000:00:11.0 00:08:02.103 Attached to 0000:00:13.0 00:08:02.103 Attached to 0000:00:12.0 00:08:02.103 0000:00:10.0: build_io_request_2 test passed 00:08:02.103 0000:00:10.0: build_io_request_4 test passed 00:08:02.103 0000:00:10.0: build_io_request_5 test passed 00:08:02.103 0000:00:10.0: build_io_request_6 test passed 00:08:02.103 0000:00:10.0: build_io_request_7 test passed 00:08:02.103 0000:00:10.0: build_io_request_10 test passed 00:08:02.103 0000:00:11.0: build_io_request_2 test passed 00:08:02.103 0000:00:11.0: build_io_request_4 test passed 00:08:02.103 0000:00:11.0: build_io_request_5 test passed 00:08:02.103 0000:00:11.0: build_io_request_6 test passed 00:08:02.103 0000:00:11.0: build_io_request_7 test passed 00:08:02.103 0000:00:11.0: build_io_request_10 test passed 00:08:02.103 Cleaning up... 00:08:02.103 00:08:02.103 real 0m0.260s 00:08:02.103 user 0m0.119s 00:08:02.103 sys 0m0.096s 00:08:02.103 00:53:24 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.103 ************************************ 00:08:02.103 END TEST nvme_sgl 00:08:02.103 ************************************ 00:08:02.103 00:53:24 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:02.103 00:53:25 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:02.103 00:53:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.103 00:53:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.103 00:53:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.103 ************************************ 00:08:02.103 START TEST nvme_e2edp 00:08:02.103 ************************************ 00:08:02.103 00:53:25 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:02.362 NVMe Write/Read with End-to-End data protection test 00:08:02.362 Attached to 0000:00:10.0 00:08:02.362 Attached to 0000:00:11.0 00:08:02.362 Attached to 0000:00:13.0 00:08:02.362 Attached to 0000:00:12.0 00:08:02.362 Cleaning up... 
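The sgl test above submits vectored I/O whose payload is walked through SGL callbacks; a request whose summed segment lengths do not cover the requested LBA count is rejected up front, which is what the "Invalid IO length parameter" lines record. A sketch of one such vectored write, assuming an already-attached namespace and qpair (the two-segment context and completion callback are illustrative, not the test's own code):

#include "spdk/nvme.h"

/* Illustrative two-segment payload iterated by the SGL callbacks. */
struct two_seg {
        void     *addr[2];
        uint32_t  len[2];
        int       idx;
};

static void
reset_sgl(void *ref, uint32_t sgl_offset)
{
        ((struct two_seg *)ref)->idx = 0;    /* offset handling elided */
}

static int
next_sge(void *ref, void **address, uint32_t *length)
{
        struct two_seg *s = ref;

        *address = s->addr[s->idx];
        *length = s->len[s->idx];
        s->idx++;
        return 0;
}

static void
write_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

/* The driver checks that the segments cover lba_count sectors before the
 * command is posted; an undersized payload fails immediately, as above. */
static int
submit_vectored_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                      struct two_seg *payload)
{
        return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* lba */, 8 /* lba_count */,
                                       write_done, payload, 0 /* io_flags */,
                                       reset_sgl, next_sge);
}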
00:08:02.362 00:08:02.362 real 0m0.197s 00:08:02.362 user 0m0.061s 00:08:02.363 sys 0m0.097s 00:08:02.363 00:53:25 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.363 ************************************ 00:08:02.363 00:53:25 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:02.363 END TEST nvme_e2edp 00:08:02.363 ************************************ 00:08:02.363 00:53:25 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:02.363 00:53:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.363 00:53:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.363 00:53:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.363 ************************************ 00:08:02.363 START TEST nvme_reserve 00:08:02.363 ************************************ 00:08:02.363 00:53:25 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:02.621 ===================================================== 00:08:02.621 NVMe Controller at PCI bus 0, device 16, function 0 00:08:02.621 ===================================================== 00:08:02.621 Reservations: Not Supported 00:08:02.621 ===================================================== 00:08:02.621 NVMe Controller at PCI bus 0, device 17, function 0 00:08:02.621 ===================================================== 00:08:02.621 Reservations: Not Supported 00:08:02.621 ===================================================== 00:08:02.621 NVMe Controller at PCI bus 0, device 19, function 0 00:08:02.621 ===================================================== 00:08:02.621 Reservations: Not Supported 00:08:02.621 ===================================================== 00:08:02.621 NVMe Controller at PCI bus 0, device 18, function 0 00:08:02.621 ===================================================== 00:08:02.621 Reservations: Not Supported 00:08:02.621 Reservation test passed 00:08:02.621 00:08:02.621 real 0m0.203s 00:08:02.621 user 0m0.071s 00:08:02.621 sys 0m0.083s 00:08:02.621 00:53:25 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.621 ************************************ 00:08:02.621 END TEST nvme_reserve 00:08:02.621 ************************************ 00:08:02.621 00:53:25 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:02.621 00:53:25 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:02.621 00:53:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.621 00:53:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.621 00:53:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.621 ************************************ 00:08:02.621 START TEST nvme_err_injection 00:08:02.621 ************************************ 00:08:02.621 00:53:25 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:02.880 NVMe Error Injection test 00:08:02.880 Attached to 0000:00:10.0 00:08:02.880 Attached to 0000:00:11.0 00:08:02.880 Attached to 0000:00:13.0 00:08:02.880 Attached to 0000:00:12.0 00:08:02.880 0000:00:10.0: get features failed as expected 00:08:02.880 0000:00:11.0: get features failed as expected 00:08:02.880 0000:00:13.0: get features failed as expected 00:08:02.880 0000:00:12.0: get features failed as expected 00:08:02.880 
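The reserve test above reports "Reservations: Not Supported" for all four controllers, so it passes without issuing any reservation command on these QEMU devices. For reference, a hedged sketch of how a host reservation key would be registered on hardware that does support the feature (the qpair, key value, and callback are illustrative, and the payload must live in DMA-safe memory such as spdk_zmalloc() output):

#include "spdk/nvme.h"

static void
resv_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

/* Register a new reservation key; only meaningful on controllers that
 * advertise reservation support, which the devices above do not. */
static int
register_key(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
             struct spdk_nvme_reservation_register_data *payload)
{
        payload->crkey = 0;         /* no current key held yet */
        payload->nrkey = 0xabcd;    /* illustrative new key */
        return spdk_nvme_ns_cmd_reservation_register(ns, qpair, payload,
                                                     true /* ignore_key */,
                                                     SPDK_NVME_RESERVE_REGISTER_KEY,
                                                     SPDK_NVME_RESERVE_PTPL_NO_CHANGES,
                                                     resv_done, NULL);
}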
0000:00:10.0: get features successfully as expected 00:08:02.880 0000:00:11.0: get features successfully as expected 00:08:02.880 0000:00:13.0: get features successfully as expected 00:08:02.880 0000:00:12.0: get features successfully as expected 00:08:02.880 0000:00:10.0: read failed as expected 00:08:02.880 0000:00:11.0: read failed as expected 00:08:02.880 0000:00:13.0: read failed as expected 00:08:02.880 0000:00:12.0: read failed as expected 00:08:02.880 0000:00:10.0: read successfully as expected 00:08:02.880 0000:00:11.0: read successfully as expected 00:08:02.880 0000:00:13.0: read successfully as expected 00:08:02.880 0000:00:12.0: read successfully as expected 00:08:02.880 Cleaning up... 00:08:02.880 00:08:02.880 real 0m0.206s 00:08:02.880 user 0m0.077s 00:08:02.880 sys 0m0.087s 00:08:02.880 00:53:25 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.880 ************************************ 00:08:02.880 END TEST nvme_err_injection 00:08:02.880 ************************************ 00:08:02.880 00:53:25 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:02.880 00:53:25 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:02.880 00:53:25 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:02.880 00:53:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.880 00:53:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.880 ************************************ 00:08:02.880 START TEST nvme_overhead 00:08:02.880 ************************************ 00:08:02.880 00:53:25 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:04.253 Initializing NVMe Controllers 00:08:04.253 Attached to 0000:00:10.0 00:08:04.253 Attached to 0000:00:11.0 00:08:04.253 Attached to 0000:00:13.0 00:08:04.253 Attached to 0000:00:12.0 00:08:04.254 Initialization complete. Launching workers. 
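The overhead run just launched times the software cost of each submission and of the completion-polling loop, which is what the "submit (in ns)" and "complete (in ns)" summaries below report. A sketch of the measurement idea using the SPDK tick counter (the read parameters and buffer handling are illustrative, not the tool's own code):

#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static void
io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
        *(bool *)arg = true;
}

/* Measure one submission and the polling that reaps its completion, in
 * ticks; convert to nanoseconds with spdk_get_ticks_hz(). */
static void
time_one_io(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
        bool done = false;
        uint64_t t0, submit_ticks, complete_ticks;
        int rc;

        t0 = spdk_get_ticks();
        rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* lba */, 1 /* lba_count */,
                                   io_done, &done, 0 /* io_flags */);
        submit_ticks = spdk_get_ticks() - t0;
        if (rc != 0) {
                return;
        }

        t0 = spdk_get_ticks();
        while (!done) {
                spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
        }
        complete_ticks = spdk_get_ticks() - t0;
        (void)submit_ticks;
        (void)complete_ticks;    /* aggregate into the histograms below */
}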
00:08:04.254 submit (in ns) avg, min, max = 11396.7, 9679.2, 178595.4
00:08:04.254 complete (in ns) avg, min, max = 7658.5, 7238.5, 215872.3
00:08:04.254 Submit histogram
00:08:04.254 ================
00:08:04.254 Range in us Cumulative Count
00:08:04.255 [Bucket table condensed: buckets from 9.649 us through 178.806 us; most submissions fall between roughly 10.9 and 11.6 us, and the cumulative count reaches 100.0000% in the 178.018 - 178.806 us bucket.]
00:08:04.255 Complete histogram
00:08:04.255 ==================
00:08:04.255 Range in us Cumulative Count
00:08:04.256 [Bucket table condensed: buckets from 7.237 us through 217.403 us; most completions fall between roughly 7.3 and 7.7 us, and the cumulative count reaches 100.0000% in the 215.828 - 217.403 us bucket.]
00:08:04.256 real 0m1.206s
00:08:04.256 user 0m1.070s
00:08:04.256 sys 0m0.088s
00:08:04.256 00:53:26 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:04.256 00:53:26 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:04.256 ************************************
00:08:04.256 END TEST nvme_overhead
00:08:04.256 ************************************
00:08:04.256 00:53:26 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration
/home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:04.256 00:53:26 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:04.256 00:53:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.256 00:53:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.256 ************************************ 00:08:04.256 START TEST nvme_arbitration 00:08:04.256 ************************************ 00:08:04.256 00:53:26 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:07.563 Initializing NVMe Controllers 00:08:07.563 Attached to 0000:00:10.0 00:08:07.563 Attached to 0000:00:11.0 00:08:07.563 Attached to 0000:00:13.0 00:08:07.563 Attached to 0000:00:12.0 00:08:07.563 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:07.563 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:07.563 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:07.563 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:07.563 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:07.563 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:07.563 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:07.564 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:07.564 Initialization complete. Launching workers. 00:08:07.564 Starting thread on core 1 with urgent priority queue 00:08:07.564 Starting thread on core 2 with urgent priority queue 00:08:07.564 Starting thread on core 3 with urgent priority queue 00:08:07.564 Starting thread on core 0 with urgent priority queue 00:08:07.564 QEMU NVMe Ctrl (12340 ) core 0: 6229.33 IO/s 16.05 secs/100000 ios 00:08:07.564 QEMU NVMe Ctrl (12342 ) core 0: 6236.67 IO/s 16.03 secs/100000 ios 00:08:07.564 QEMU NVMe Ctrl (12341 ) core 1: 6364.67 IO/s 15.71 secs/100000 ios 00:08:07.564 QEMU NVMe Ctrl (12342 ) core 1: 6381.00 IO/s 15.67 secs/100000 ios 00:08:07.564 QEMU NVMe Ctrl (12343 ) core 2: 6082.00 IO/s 16.44 secs/100000 ios 00:08:07.564 QEMU NVMe Ctrl (12342 ) core 3: 6058.33 IO/s 16.51 secs/100000 ios 00:08:07.564 ======================================================== 00:08:07.564 00:08:07.564 00:08:07.564 real 0m3.243s 00:08:07.564 user 0m9.028s 00:08:07.564 sys 0m0.121s 00:08:07.564 00:53:30 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.564 00:53:30 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:07.564 ************************************ 00:08:07.564 END TEST nvme_arbitration 00:08:07.564 ************************************ 00:08:07.564 00:53:30 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:07.564 00:53:30 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:07.564 00:53:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.564 00:53:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.564 ************************************ 00:08:07.564 START TEST nvme_single_aen 00:08:07.564 ************************************ 00:08:07.564 00:53:30 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:07.564 Asynchronous Event Request test 00:08:07.564 Attached to 0000:00:10.0 00:08:07.564 Attached to 0000:00:11.0 00:08:07.564 Attached to 0000:00:13.0 00:08:07.564 Attached to 0000:00:12.0 00:08:07.564 
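The arbitration run above binds each worker's I/O queue pair to a weighted-round-robin priority class; the "urgent priority queue" threads come from allocating qpairs with an urgent QPRIO. A sketch of that allocation, assuming WRR arbitration was enabled in the controller options at attach time:

#include "spdk/nvme.h"

/* Allocate an I/O qpair in the urgent priority class; requires the
 * controller to have been attached with weighted-round-robin arbitration. */
static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}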
Reset controller to setup AER completions for this process 00:08:07.564 Registering asynchronous event callbacks... 00:08:07.564 Getting orig temperature thresholds of all controllers 00:08:07.564 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:07.564 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:07.564 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:07.564 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:07.564 Setting all controllers temperature threshold low to trigger AER 00:08:07.564 Waiting for all controllers temperature threshold to be set lower 00:08:07.564 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:07.564 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:07.564 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:07.564 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:07.564 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:07.564 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:07.564 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:07.564 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:07.564 Waiting for all controllers to trigger AER and reset threshold 00:08:07.564 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.564 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.564 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.564 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:07.564 Cleaning up... 00:08:07.564 00:08:07.564 real 0m0.191s 00:08:07.564 user 0m0.071s 00:08:07.564 sys 0m0.090s 00:08:07.564 00:53:30 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.564 ************************************ 00:08:07.564 END TEST nvme_single_aen 00:08:07.564 ************************************ 00:08:07.564 00:53:30 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:07.564 00:53:30 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:07.564 00:53:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.564 00:53:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.564 00:53:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.564 ************************************ 00:08:07.564 START TEST nvme_doorbell_aers 00:08:07.564 ************************************ 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
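The single_aen pass above triggers an asynchronous event by lowering each controller's temperature threshold beneath its current 323 K reading, then polls the admin queue until the registered callback fires. A sketch of that mechanism (the threshold value and busy-wait loop are illustrative):

#include <stdbool.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
        *(bool *)arg = true;    /* temperature-threshold AER arrived */
}

static void
set_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

/* Drop the threshold below the live temperature so the controller posts
 * an asynchronous event, then reap it on the admin queue. */
static void
trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
{
        bool fired = false;

        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, &fired);
        /* cdw11 carries the threshold in Kelvin; 0 is far below 323 K. */
        spdk_nvme_ctrlr_cmd_set_feature(ctrlr, SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                        0 /* cdw11 */, 0 /* cdw12 */, NULL, 0,
                                        set_done, NULL);
        while (!fired) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
}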
00:08:07.564 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:07.822 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:07.822 00:53:30 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:07.822 00:53:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:07.822 00:53:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:07.823 [2024-11-26 00:53:30.707132] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:17.787 Executing: test_write_invalid_db 00:08:17.787 Waiting for AER completion... 00:08:17.787 Failure: test_write_invalid_db 00:08:17.787 00:08:17.787 Executing: test_invalid_db_write_overflow_sq 00:08:17.787 Waiting for AER completion... 00:08:17.787 Failure: test_invalid_db_write_overflow_sq 00:08:17.787 00:08:17.787 Executing: test_invalid_db_write_overflow_cq 00:08:17.787 Waiting for AER completion... 00:08:17.787 Failure: test_invalid_db_write_overflow_cq 00:08:17.787 00:08:17.787 00:53:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:17.787 00:53:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:18.046 [2024-11-26 00:53:40.765222] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:28.013 Executing: test_write_invalid_db 00:08:28.013 Waiting for AER completion... 00:08:28.013 Failure: test_write_invalid_db 00:08:28.013 00:08:28.013 Executing: test_invalid_db_write_overflow_sq 00:08:28.013 Waiting for AER completion... 00:08:28.013 Failure: test_invalid_db_write_overflow_sq 00:08:28.013 00:08:28.013 Executing: test_invalid_db_write_overflow_cq 00:08:28.013 Waiting for AER completion... 00:08:28.013 Failure: test_invalid_db_write_overflow_cq 00:08:28.013 00:08:28.013 00:53:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:28.013 00:53:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:28.013 [2024-11-26 00:53:50.788776] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:37.980 Executing: test_write_invalid_db 00:08:37.980 Waiting for AER completion... 00:08:37.980 Failure: test_write_invalid_db 00:08:37.980 00:08:37.980 Executing: test_invalid_db_write_overflow_sq 00:08:37.980 Waiting for AER completion... 00:08:37.980 Failure: test_invalid_db_write_overflow_sq 00:08:37.980 00:08:37.980 Executing: test_invalid_db_write_overflow_cq 00:08:37.980 Waiting for AER completion... 
00:08:37.980 Failure: test_invalid_db_write_overflow_cq 00:08:37.980 00:08:37.980 00:54:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:37.980 00:54:00 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:37.980 [2024-11-26 00:54:00.832309] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 Executing: test_write_invalid_db 00:08:47.945 Waiting for AER completion... 00:08:47.945 Failure: test_write_invalid_db 00:08:47.945 00:08:47.945 Executing: test_invalid_db_write_overflow_sq 00:08:47.945 Waiting for AER completion... 00:08:47.945 Failure: test_invalid_db_write_overflow_sq 00:08:47.945 00:08:47.945 Executing: test_invalid_db_write_overflow_cq 00:08:47.945 Waiting for AER completion... 00:08:47.945 Failure: test_invalid_db_write_overflow_cq 00:08:47.945 00:08:47.945 00:08:47.945 real 0m40.175s 00:08:47.945 user 0m34.153s 00:08:47.945 sys 0m5.631s 00:08:47.945 00:54:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.945 ************************************ 00:08:47.945 END TEST nvme_doorbell_aers 00:08:47.945 ************************************ 00:08:47.945 00:54:10 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:47.945 00:54:10 nvme -- nvme/nvme.sh@97 -- # uname 00:08:47.945 00:54:10 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:47.945 00:54:10 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:47.945 00:54:10 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:47.945 00:54:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.945 00:54:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.945 ************************************ 00:08:47.945 START TEST nvme_multi_aen 00:08:47.945 ************************************ 00:08:47.945 00:54:10 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:47.945 [2024-11-26 00:54:10.847616] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.847677] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.847690] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.848715] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.848734] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.848742] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.849654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. 
Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.849676] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.849700] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.850572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.850594] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:47.945 [2024-11-26 00:54:10.850602] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76611) is not found. Dropping the request. 00:08:48.204 Child process pid: 77137 00:08:48.204 [Child] Asynchronous Event Request test 00:08:48.204 [Child] Attached to 0000:00:10.0 00:08:48.204 [Child] Attached to 0000:00:11.0 00:08:48.204 [Child] Attached to 0000:00:13.0 00:08:48.204 [Child] Attached to 0000:00:12.0 00:08:48.204 [Child] Registering asynchronous event callbacks... 00:08:48.204 [Child] Getting orig temperature thresholds of all controllers 00:08:48.204 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:48.204 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 [Child] Cleaning up... 00:08:48.204 Asynchronous Event Request test 00:08:48.204 Attached to 0000:00:10.0 00:08:48.204 Attached to 0000:00:11.0 00:08:48.204 Attached to 0000:00:13.0 00:08:48.204 Attached to 0000:00:12.0 00:08:48.204 Reset controller to setup AER completions for this process 00:08:48.204 Registering asynchronous event callbacks... 
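[Editor's note] The child and parent AER passes around this point (like nvme_single_aen earlier) all work the same way: the test drops each controller's temperature threshold below the current composite temperature, the controller answers with an asynchronous event for log page 2 (SMART / health), and aer_cb restores the threshold. A rough host-side equivalent using nvme-cli — the device node, flag spellings, and values are assumptions for illustration, not part of this captured run:
# Read the composite temperature from the SMART/health log (log page 0x02,
# the same page aer_cb fetches above). Assumes /dev/nvme0 maps to one of these controllers.
nvme smart-log /dev/nvme0 | grep -i temperature
# Feature ID 0x04 is the temperature threshold. Setting it below the current
# temperature (323 K in this run) should provoke the same AEN seen in the log.
nvme set-feature /dev/nvme0 --feature-id=4 --value=313
# Restore the original threshold afterwards (343 K above).
nvme set-feature /dev/nvme0 --feature-id=4 --value=343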
00:08:48.204 Getting orig temperature thresholds of all controllers 00:08:48.204 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:48.204 Setting all controllers temperature threshold low to trigger AER 00:08:48.204 Waiting for all controllers temperature threshold to be set lower 00:08:48.204 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:48.204 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:48.204 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:48.204 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:48.204 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:48.204 Waiting for all controllers to trigger AER and reset threshold 00:08:48.204 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.204 Cleaning up... 00:08:48.204 00:08:48.204 real 0m0.388s 00:08:48.204 user 0m0.129s 00:08:48.204 sys 0m0.165s 00:08:48.204 00:54:11 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.204 ************************************ 00:08:48.204 END TEST nvme_multi_aen 00:08:48.204 00:54:11 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:48.204 ************************************ 00:08:48.204 00:54:11 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:48.204 00:54:11 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:48.204 00:54:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.204 00:54:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.204 ************************************ 00:08:48.204 START TEST nvme_startup 00:08:48.204 ************************************ 00:08:48.204 00:54:11 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:48.462 Initializing NVMe Controllers 00:08:48.462 Attached to 0000:00:10.0 00:08:48.462 Attached to 0000:00:11.0 00:08:48.462 Attached to 0000:00:13.0 00:08:48.462 Attached to 0000:00:12.0 00:08:48.462 Initialization complete. 00:08:48.462 Time used:129925.844 (us). 
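[Editor's note] nvme_startup above is the simplest test in this pass: it attaches every controller and reports how long initialization took — 129925.844 us (~130 ms) for four controllers here; -t 1000000 appears to be a time budget in microseconds. Re-running it by hand looks roughly like this (paths match this CI workspace; a local checkout will differ):
cd /home/vagrant/spdk_repo/spdk
sudo scripts/setup.sh                      # rebind the NVMe devices to a userspace driver for SPDK
sudo test/nvme/startup/startup -t 1000000  # attach all controllers, print "Time used" in us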
00:08:48.462 00:08:48.462 real 0m0.186s 00:08:48.462 user 0m0.061s 00:08:48.462 sys 0m0.087s 00:08:48.462 00:54:11 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.462 ************************************ 00:08:48.462 END TEST nvme_startup 00:08:48.462 ************************************ 00:08:48.462 00:54:11 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:48.462 00:54:11 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:48.462 00:54:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.462 00:54:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.462 00:54:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.462 ************************************ 00:08:48.462 START TEST nvme_multi_secondary 00:08:48.462 ************************************ 00:08:48.462 00:54:11 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:48.462 00:54:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77193 00:08:48.462 00:54:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:48.462 00:54:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77194 00:08:48.462 00:54:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:48.462 00:54:11 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:51.744 Initializing NVMe Controllers 00:08:51.744 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:51.744 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:51.744 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:51.744 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:51.744 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:51.744 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:51.744 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:51.744 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:51.744 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:51.744 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:51.744 Initialization complete. Launching workers. 
00:08:51.744 ======================================================== 00:08:51.744 Latency(us) 00:08:51.744 Device Information : IOPS MiB/s Average min max 00:08:51.744 PCIE (0000:00:10.0) NSID 1 from core 2: 3247.02 12.68 4925.83 1011.81 13503.53 00:08:51.744 PCIE (0000:00:11.0) NSID 1 from core 2: 3247.02 12.68 4927.59 1011.87 13119.51 00:08:51.744 PCIE (0000:00:13.0) NSID 1 from core 2: 3247.02 12.68 4927.76 961.33 13158.40 00:08:51.744 PCIE (0000:00:12.0) NSID 1 from core 2: 3247.02 12.68 4927.94 896.25 12581.36 00:08:51.744 PCIE (0000:00:12.0) NSID 2 from core 2: 3247.02 12.68 4927.73 1026.74 12770.73 00:08:51.744 PCIE (0000:00:12.0) NSID 3 from core 2: 3247.02 12.68 4927.83 1045.25 13416.68 00:08:51.744 ======================================================== 00:08:51.744 Total : 19482.12 76.10 4927.45 896.25 13503.53 00:08:51.744 00:08:52.002 Initializing NVMe Controllers 00:08:52.002 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.002 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.002 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.002 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.002 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:52.002 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:52.002 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:52.002 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:52.002 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:52.002 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:52.002 Initialization complete. Launching workers. 00:08:52.002 ======================================================== 00:08:52.002 Latency(us) 00:08:52.002 Device Information : IOPS MiB/s Average min max 00:08:52.002 PCIE (0000:00:10.0) NSID 1 from core 1: 7727.73 30.19 2069.03 1010.44 5942.39 00:08:52.002 PCIE (0000:00:11.0) NSID 1 from core 1: 7727.73 30.19 2069.93 1014.32 6023.14 00:08:52.002 PCIE (0000:00:13.0) NSID 1 from core 1: 7727.73 30.19 2069.89 1023.61 6021.84 00:08:52.002 PCIE (0000:00:12.0) NSID 1 from core 1: 7727.73 30.19 2069.82 887.62 5684.41 00:08:52.002 PCIE (0000:00:12.0) NSID 2 from core 1: 7727.73 30.19 2069.86 937.35 5418.57 00:08:52.002 PCIE (0000:00:12.0) NSID 3 from core 1: 7727.73 30.19 2069.81 945.28 5352.63 00:08:52.002 ======================================================== 00:08:52.002 Total : 46366.36 181.12 2069.72 887.62 6023.14 00:08:52.002 00:08:52.002 00:54:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77193 00:08:53.902 Initializing NVMe Controllers 00:08:53.902 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.902 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.902 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.902 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.902 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:53.902 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:53.902 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:53.902 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:53.902 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:53.902 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:53.902 Initialization complete. Launching workers. 
00:08:53.902 ======================================================== 00:08:53.902 Latency(us) 00:08:53.902 Device Information : IOPS MiB/s Average min max 00:08:53.902 PCIE (0000:00:10.0) NSID 1 from core 0: 10903.53 42.59 1466.16 686.30 5266.56 00:08:53.902 PCIE (0000:00:11.0) NSID 1 from core 0: 10903.53 42.59 1467.02 704.99 5600.87 00:08:53.902 PCIE (0000:00:13.0) NSID 1 from core 0: 10903.53 42.59 1466.99 556.28 6041.61 00:08:53.902 PCIE (0000:00:12.0) NSID 1 from core 0: 10903.53 42.59 1466.97 493.18 5923.95 00:08:53.902 PCIE (0000:00:12.0) NSID 2 from core 0: 10903.53 42.59 1466.95 405.21 6339.55 00:08:53.902 PCIE (0000:00:12.0) NSID 3 from core 0: 10903.53 42.59 1466.92 326.96 5609.70 00:08:53.902 ======================================================== 00:08:53.902 Total : 65421.21 255.55 1466.84 326.96 6339.55 00:08:53.902 00:08:53.902 00:54:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77194 00:08:53.903 00:54:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77263 00:08:53.903 00:54:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:53.903 00:54:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77264 00:08:53.903 00:54:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:53.903 00:54:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:57.241 Initializing NVMe Controllers 00:08:57.241 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.241 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.241 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.241 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.241 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:57.241 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:57.241 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:57.241 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:57.241 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:57.241 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:57.241 Initialization complete. Launching workers. 
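[Editor's note] The trio of spdk_nvme_perf invocations just launched (pids 77263/77264 here, mirroring the earlier 77193/77194 batch) all pass -i 0, i.e. the same shared-memory id: the first process up becomes the DPDK primary and the others attach to the same controllers as secondaries, each pinned to a disjoint core mask. -q 16 -w read -o 4096 -t N sets queue depth, workload, I/O size, and run time; one instance runs 5 s so it outlives the 3 s ones. A hand-run sketch of the same pattern, using the flags from this log:
# Longest run (core mask 0x1): first up, owns the controllers as DPDK primary.
sudo build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
# Secondaries: same shm id (-i 0), disjoint core masks, shorter runs.
sudo build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
sudo build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
wait   # each instance prints a per-core latency table like the ones in this log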
00:08:57.241 ======================================================== 00:08:57.241 Latency(us) 00:08:57.241 Device Information : IOPS MiB/s Average min max 00:08:57.241 PCIE (0000:00:10.0) NSID 1 from core 0: 7069.52 27.62 2261.79 695.92 11346.77 00:08:57.241 PCIE (0000:00:11.0) NSID 1 from core 0: 7069.52 27.62 2263.94 718.12 11078.02 00:08:57.241 PCIE (0000:00:13.0) NSID 1 from core 0: 7069.52 27.62 2264.59 724.27 11219.71 00:08:57.241 PCIE (0000:00:12.0) NSID 1 from core 0: 7069.52 27.62 2264.57 726.07 11559.07 00:08:57.241 PCIE (0000:00:12.0) NSID 2 from core 0: 7069.52 27.62 2265.20 726.78 12684.66 00:08:57.241 PCIE (0000:00:12.0) NSID 3 from core 0: 7069.52 27.62 2265.18 719.05 12434.42 00:08:57.241 ======================================================== 00:08:57.241 Total : 42417.14 165.69 2264.21 695.92 12684.66 00:08:57.241 00:08:57.241 Initializing NVMe Controllers 00:08:57.241 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.241 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.241 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.241 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.241 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:57.241 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:57.241 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:57.241 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:57.241 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:57.241 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:57.241 Initialization complete. Launching workers. 00:08:57.241 ======================================================== 00:08:57.241 Latency(us) 00:08:57.241 Device Information : IOPS MiB/s Average min max 00:08:57.241 PCIE (0000:00:10.0) NSID 1 from core 1: 6675.62 26.08 2395.30 1000.48 13582.22 00:08:57.241 PCIE (0000:00:11.0) NSID 1 from core 1: 6675.62 26.08 2396.56 1026.22 13290.59 00:08:57.241 PCIE (0000:00:13.0) NSID 1 from core 1: 6675.62 26.08 2396.47 1036.83 13209.01 00:08:57.241 PCIE (0000:00:12.0) NSID 1 from core 1: 6675.62 26.08 2396.39 1035.00 12542.44 00:08:57.241 PCIE (0000:00:12.0) NSID 2 from core 1: 6675.62 26.08 2396.29 1045.10 16300.75 00:08:57.241 PCIE (0000:00:12.0) NSID 3 from core 1: 6675.62 26.08 2396.63 1019.30 17264.28 00:08:57.241 ======================================================== 00:08:57.241 Total : 40053.74 156.46 2396.27 1000.48 17264.28 00:08:57.241 00:08:59.155 Initializing NVMe Controllers 00:08:59.155 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.155 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.155 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.155 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.155 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:59.155 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:59.155 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:59.155 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:59.155 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:59.155 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:59.155 Initialization complete. Launching workers. 
00:08:59.155 ======================================================== 00:08:59.155 Latency(us) 00:08:59.155 Device Information : IOPS MiB/s Average min max 00:08:59.156 PCIE (0000:00:10.0) NSID 1 from core 2: 2626.78 10.26 6089.39 924.33 25293.61 00:08:59.156 PCIE (0000:00:11.0) NSID 1 from core 2: 2626.78 10.26 6090.20 928.16 28673.20 00:08:59.156 PCIE (0000:00:13.0) NSID 1 from core 2: 2626.78 10.26 6090.70 945.33 24907.29 00:08:59.156 PCIE (0000:00:12.0) NSID 1 from core 2: 2626.78 10.26 6090.56 954.38 28929.69 00:08:59.156 PCIE (0000:00:12.0) NSID 2 from core 2: 2626.78 10.26 6090.11 943.54 27516.76 00:08:59.156 PCIE (0000:00:12.0) NSID 3 from core 2: 2626.78 10.26 6090.24 950.03 23576.17 00:08:59.156 ======================================================== 00:08:59.156 Total : 15760.71 61.57 6090.20 924.33 28929.69 00:08:59.156 00:08:59.156 ************************************ 00:08:59.156 END TEST nvme_multi_secondary 00:08:59.156 ************************************ 00:08:59.156 00:54:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77263 00:08:59.156 00:54:21 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77264 00:08:59.156 00:08:59.156 real 0m10.652s 00:08:59.156 user 0m18.320s 00:08:59.156 sys 0m0.670s 00:08:59.156 00:54:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.156 00:54:21 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:59.156 00:54:22 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:59.156 00:54:22 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:59.156 00:54:22 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/76226 ]] 00:08:59.156 00:54:22 nvme -- common/autotest_common.sh@1094 -- # kill 76226 00:08:59.156 00:54:22 nvme -- common/autotest_common.sh@1095 -- # wait 76226 00:08:59.156 [2024-11-26 00:54:22.031039] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.031121] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.031149] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.031166] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.032747] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.032820] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.032867] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.032886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.033987] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 
00:08:59.156 [2024-11-26 00:54:22.034054] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.034082] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.034101] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.035632] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.035702] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.035727] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.156 [2024-11-26 00:54:22.035744] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77136) is not found. Dropping the request. 00:08:59.415 00:54:22 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:59.416 00:54:22 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:59.416 00:54:22 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:59.416 00:54:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.416 00:54:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.416 00:54:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.416 ************************************ 00:08:59.416 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:59.416 ************************************ 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:59.416 * Looking for test storage... 
00:08:59.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:59.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.416 --rc genhtml_branch_coverage=1 00:08:59.416 --rc genhtml_function_coverage=1 00:08:59.416 --rc genhtml_legend=1 00:08:59.416 --rc geninfo_all_blocks=1 00:08:59.416 --rc geninfo_unexecuted_blocks=1 00:08:59.416 00:08:59.416 ' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:59.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.416 --rc genhtml_branch_coverage=1 00:08:59.416 --rc genhtml_function_coverage=1 00:08:59.416 --rc genhtml_legend=1 00:08:59.416 --rc geninfo_all_blocks=1 00:08:59.416 --rc geninfo_unexecuted_blocks=1 00:08:59.416 00:08:59.416 ' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:59.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.416 --rc genhtml_branch_coverage=1 00:08:59.416 --rc genhtml_function_coverage=1 00:08:59.416 --rc genhtml_legend=1 00:08:59.416 --rc geninfo_all_blocks=1 00:08:59.416 --rc geninfo_unexecuted_blocks=1 00:08:59.416 00:08:59.416 ' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:59.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.416 --rc genhtml_branch_coverage=1 00:08:59.416 --rc genhtml_function_coverage=1 00:08:59.416 --rc genhtml_legend=1 00:08:59.416 --rc geninfo_all_blocks=1 00:08:59.416 --rc geninfo_unexecuted_blocks=1 00:08:59.416 00:08:59.416 ' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:59.416 
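[Editor's note] The long xtrace block above is the harness choosing lcov flags: lt 1.15 2 splits both version strings on "." and "-" and compares them field by field, and because lcov 1.x sorts below 2, the branch/function-coverage LCOV_OPTS and LCOV variables get exported. Distilled, the comparison logic is roughly this (a condensed sketch, not the verbatim scripts/common.sh source):
lt() {
  local IFS=.- i
  local -a v1 v2
  read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly smaller field: lower version
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly larger field: not lower
  done
  return 1                                        # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov < 2: enable lcov_branch_coverage/lcov_function_coverage opts"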
00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77419 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77419 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 77419 ']' 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
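[Editor's note] The spdk_tgt being launched here serves the reset-stuck-admin-command test: attach the first controller as bdev nvme0, arm a one-shot error injection that holds an admin command for up to 15 s, submit a Get Features command that then sits stuck, and reset the controller to flush it out. Condensed to the raw RPC calls that appear verbatim in the trace below (the base64 payload, whose first byte 0x0a is the Get Features opcode, is elided here):
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
scripts/rpc.py bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c <base64 command> &  # gets stuck
scripts/rpc.py bdev_nvme_reset_controller nvme0  # reset completes the stuck command manually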
00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:59.416 00:54:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:59.675 [2024-11-26 00:54:22.381337] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:08:59.675 [2024-11-26 00:54:22.381449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77419 ] 00:08:59.675 [2024-11-26 00:54:22.522619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:59.675 [2024-11-26 00:54:22.552805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:59.675 [2024-11-26 00:54:22.573528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.675 [2024-11-26 00:54:22.574188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.675 [2024-11-26 00:54:22.574196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:59.675 [2024-11-26 00:54:22.574278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.611 nvme0n1 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_O64ck.txt 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:00.611 true 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732582463 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77442 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:00.611 00:54:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.518 [2024-11-26 00:54:25.321510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:02.518 [2024-11-26 00:54:25.322045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:02.518 [2024-11-26 00:54:25.322091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:02.518 [2024-11-26 00:54:25.322105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:02.518 [2024-11-26 00:54:25.325163] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.518 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77442 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77442 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77442 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_O64ck.txt 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:02.518 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") 
| hexdump -ve '/1 "0x%02x\n"')) 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_O64ck.txt 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77419 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 77419 ']' 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 77419 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.519 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77419 00:09:02.780 killing process with pid 77419 00:09:02.780 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.780 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.780 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77419' 00:09:02.780 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 77419 00:09:02.780 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 77419 00:09:03.042 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != 
nvme_status_sct )) 00:09:03.042 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:03.042 00:09:03.042 real 0m3.586s 00:09:03.042 user 0m12.921s 00:09:03.042 sys 0m0.458s 00:09:03.042 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.042 00:54:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:03.042 ************************************ 00:09:03.042 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:03.042 ************************************ 00:09:03.042 00:54:25 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:03.042 00:54:25 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:03.042 00:54:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:03.042 00:54:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.042 00:54:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:03.042 ************************************ 00:09:03.042 START TEST nvme_fio 00:09:03.042 ************************************ 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:03.042 00:54:25 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:03.042 00:54:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:03.302 00:54:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:03.302 00:54:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:03.564 00:54:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:03.564 00:54:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:03.564 
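[Editor's note] fio_nvme above is a thin wrapper around stock fio: it locates the sanitizer runtime the SPDK fio plugin links against (libasan.so.8 in this CI build), LD_PRELOADs it together with build/fio/spdk_nvme so that ioengine=spdk (set in example_config.fio) resolves to SPDK's NVMe engine, and note the dots in traddr=0000.00.10.0 — the plugin accepts "." in place of ":", which fio reserves in filenames. Stripped of the sanitizer detection traced below, the invocation boils down to (ASan-less builds would drop libasan from the preload):
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096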
00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.564 00:54:26 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:03.825 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.825 fio-3.35 00:09:03.825 Starting 1 thread 00:09:09.194 00:09:09.194 test: (groupid=0, jobs=1): err= 0: pid=77571: Tue Nov 26 00:54:31 2024 00:09:09.194 read: IOPS=19.0k, BW=74.4MiB/s (78.0MB/s)(149MiB/2001msec) 00:09:09.194 slat (nsec): min=4221, max=81285, avg=5651.79, stdev=3126.83 00:09:09.194 clat (usec): min=670, max=9531, avg=3338.23, stdev=1239.05 00:09:09.194 lat (usec): min=682, max=9541, avg=3343.89, stdev=1240.47 00:09:09.194 clat percentiles (usec): 00:09:09.194 | 1.00th=[ 1926], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:09:09.194 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 3097], 00:09:09.194 | 70.00th=[ 3490], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 5997], 00:09:09.194 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 8979], 99.95th=[ 9110], 00:09:09.194 | 99.99th=[ 9372] 00:09:09.194 bw ( KiB/s): min=75784, max=79336, per=100.00%, avg=77546.67, stdev=1776.15, samples=3 00:09:09.194 iops : min=18946, max=19834, avg=19386.67, stdev=444.04, samples=3 00:09:09.194 write: IOPS=19.0k, BW=74.3MiB/s (77.9MB/s)(149MiB/2001msec); 0 zone resets 00:09:09.194 slat (usec): min=4, max=149, avg= 5.74, stdev= 3.23 00:09:09.194 clat (usec): min=601, max=9597, avg=3365.77, stdev=1238.22 00:09:09.194 lat (usec): min=614, max=9611, avg=3371.51, stdev=1239.61 00:09:09.194 clat percentiles (usec): 00:09:09.194 | 1.00th=[ 1942], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:09.194 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 3130], 00:09:09.194 | 70.00th=[ 3556], 80.00th=[ 4293], 90.00th=[ 5276], 95.00th=[ 6063], 00:09:09.194 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8979], 99.95th=[ 9110], 00:09:09.194 | 99.99th=[ 9503] 00:09:09.194 bw ( KiB/s): 
min=76184, max=79680, per=100.00%, avg=77736.00, stdev=1780.66, samples=3 00:09:09.194 iops : min=19046, max=19920, avg=19434.00, stdev=445.17, samples=3 00:09:09.194 lat (usec) : 750=0.01%, 1000=0.01% 00:09:09.194 lat (msec) : 2=1.40%, 4=75.40%, 10=23.18% 00:09:09.194 cpu : usr=98.80%, sys=0.20%, ctx=5, majf=0, minf=623 00:09:09.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:09.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:09.194 issued rwts: total=38089,38072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:09.194 00:09:09.194 Run status group 0 (all jobs): 00:09:09.194 READ: bw=74.4MiB/s (78.0MB/s), 74.4MiB/s-74.4MiB/s (78.0MB/s-78.0MB/s), io=149MiB (156MB), run=2001-2001msec 00:09:09.194 WRITE: bw=74.3MiB/s (77.9MB/s), 74.3MiB/s-74.3MiB/s (77.9MB/s-77.9MB/s), io=149MiB (156MB), run=2001-2001msec 00:09:09.194 ----------------------------------------------------- 00:09:09.194 Suppressions used: 00:09:09.194 count bytes template 00:09:09.194 1 32 /usr/src/fio/parse.c 00:09:09.194 1 8 libtcmalloc_minimal.so 00:09:09.194 ----------------------------------------------------- 00:09:09.194 00:09:09.194 00:54:31 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:09.194 00:54:31 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:09.194 00:54:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:09.194 00:54:31 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:09.455 00:54:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:09.455 00:54:32 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:09.716 00:54:32 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:09.716 00:54:32 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:09.716 00:54:32 nvme.nvme_fio -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:09.716 00:54:32 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:09.716 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:09.716 fio-3.35 00:09:09.716 Starting 1 thread 00:09:16.308 00:09:16.308 test: (groupid=0, jobs=1): err= 0: pid=77627: Tue Nov 26 00:54:38 2024 00:09:16.308 read: IOPS=19.7k, BW=76.9MiB/s (80.7MB/s)(154MiB/2001msec) 00:09:16.308 slat (usec): min=4, max=789, avg= 5.37, stdev= 4.79 00:09:16.308 clat (usec): min=694, max=12214, avg=3234.81, stdev=1142.67 00:09:16.308 lat (usec): min=707, max=12245, avg=3240.18, stdev=1143.89 00:09:16.308 clat percentiles (usec): 00:09:16.308 | 1.00th=[ 1926], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2442], 00:09:16.308 | 30.00th=[ 2507], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2966], 00:09:16.308 | 70.00th=[ 3261], 80.00th=[ 4047], 90.00th=[ 5080], 95.00th=[ 5669], 00:09:16.308 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 8356], 99.95th=[ 9241], 00:09:16.308 | 99.99th=[11863] 00:09:16.308 bw ( KiB/s): min=74832, max=80248, per=98.93%, avg=77922.67, stdev=2787.93, samples=3 00:09:16.308 iops : min=18708, max=20062, avg=19480.67, stdev=696.98, samples=3 00:09:16.308 write: IOPS=19.7k, BW=76.8MiB/s (80.5MB/s)(154MiB/2001msec); 0 zone resets 00:09:16.308 slat (nsec): min=4276, max=79040, avg=5471.57, stdev=2606.44 00:09:16.308 clat (usec): min=669, max=11872, avg=3252.90, stdev=1146.01 00:09:16.308 lat (usec): min=682, max=11881, avg=3258.37, stdev=1147.11 00:09:16.308 clat percentiles (usec): 00:09:16.308 | 1.00th=[ 1958], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:16.308 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2835], 60.00th=[ 2999], 00:09:16.308 | 70.00th=[ 3294], 80.00th=[ 4080], 90.00th=[ 5080], 95.00th=[ 5735], 00:09:16.308 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 9503], 00:09:16.308 | 99.99th=[11731] 00:09:16.308 bw ( KiB/s): min=75136, max=80672, per=99.31%, avg=78096.00, stdev=2787.91, samples=3 00:09:16.308 iops : min=18784, max=20168, avg=19524.00, stdev=696.98, samples=3 00:09:16.308 lat (usec) : 750=0.01%, 1000=0.01% 00:09:16.308 lat (msec) : 2=1.41%, 4=78.03%, 10=20.52%, 20=0.04% 00:09:16.308 cpu : usr=98.50%, sys=0.35%, ctx=4, majf=0, minf=623 00:09:16.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:16.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.308 issued rwts: total=39402,39338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.308 00:09:16.308 Run status group 0 (all jobs): 00:09:16.308 READ: bw=76.9MiB/s (80.7MB/s), 76.9MiB/s-76.9MiB/s (80.7MB/s-80.7MB/s), io=154MiB (161MB), run=2001-2001msec 00:09:16.308 WRITE: bw=76.8MiB/s (80.5MB/s), 76.8MiB/s-76.8MiB/s (80.5MB/s-80.5MB/s), io=154MiB (161MB), run=2001-2001msec 00:09:16.308 
----------------------------------------------------- 00:09:16.308 Suppressions used: 00:09:16.308 count bytes template 00:09:16.308 1 32 /usr/src/fio/parse.c 00:09:16.308 1 8 libtcmalloc_minimal.so 00:09:16.308 ----------------------------------------------------- 00:09:16.308 00:09:16.308 00:54:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:16.308 00:54:38 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.308 00:54:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.308 00:54:38 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:16.308 00:54:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:16.308 00:54:38 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.308 00:54:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.308 00:54:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.308 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.308 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:16.308 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.308 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.309 00:54:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:16.637 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:16.637 fio-3.35 00:09:16.637 Starting 1 thread 00:09:21.933 00:09:21.933 test: (groupid=0, jobs=1): err= 0: pid=77688: Tue Nov 26 00:54:44 2024 00:09:21.933 read: IOPS=18.6k, BW=72.5MiB/s (76.1MB/s)(145MiB/2001msec) 00:09:21.933 slat (nsec): min=3391, max=79088, avg=5614.68, stdev=3063.70 
00:09:21.933 clat (usec): min=271, max=9649, avg=3421.28, stdev=1228.46 00:09:21.933 lat (usec): min=276, max=9663, avg=3426.89, stdev=1229.68 00:09:21.933 clat percentiles (usec): 00:09:21.933 | 1.00th=[ 1844], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:09:21.933 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2933], 60.00th=[ 3228], 00:09:21.933 | 70.00th=[ 3752], 80.00th=[ 4490], 90.00th=[ 5276], 95.00th=[ 5997], 00:09:21.933 | 99.00th=[ 7111], 99.50th=[ 7570], 99.90th=[ 8455], 99.95th=[ 8848], 00:09:21.933 | 99.99th=[ 9241] 00:09:21.933 bw ( KiB/s): min=67840, max=83776, per=100.00%, avg=75264.00, stdev=8023.52, samples=3 00:09:21.933 iops : min=16960, max=20944, avg=18816.00, stdev=2005.88, samples=3 00:09:21.933 write: IOPS=18.6k, BW=72.6MiB/s (76.2MB/s)(145MiB/2001msec); 0 zone resets 00:09:21.933 slat (nsec): min=3526, max=83006, avg=5728.20, stdev=2970.13 00:09:21.933 clat (usec): min=233, max=9591, avg=3444.48, stdev=1236.72 00:09:21.933 lat (usec): min=239, max=9596, avg=3450.21, stdev=1237.91 00:09:21.933 clat percentiles (usec): 00:09:21.933 | 1.00th=[ 1860], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:21.933 | 30.00th=[ 2606], 40.00th=[ 2769], 50.00th=[ 2966], 60.00th=[ 3228], 00:09:21.933 | 70.00th=[ 3752], 80.00th=[ 4490], 90.00th=[ 5342], 95.00th=[ 6063], 00:09:21.933 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8356], 99.95th=[ 8848], 00:09:21.933 | 99.99th=[ 9372] 00:09:21.933 bw ( KiB/s): min=68056, max=83640, per=100.00%, avg=75306.67, stdev=7848.21, samples=3 00:09:21.933 iops : min=17014, max=20910, avg=18826.67, stdev=1962.05, samples=3 00:09:21.933 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.05% 00:09:21.933 lat (msec) : 2=1.46%, 4=71.97%, 10=26.50% 00:09:21.933 cpu : usr=98.65%, sys=0.30%, ctx=4, majf=0, minf=624 00:09:21.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:21.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.933 issued rwts: total=37164,37203,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.933 00:09:21.933 Run status group 0 (all jobs): 00:09:21.933 READ: bw=72.5MiB/s (76.1MB/s), 72.5MiB/s-72.5MiB/s (76.1MB/s-76.1MB/s), io=145MiB (152MB), run=2001-2001msec 00:09:21.933 WRITE: bw=72.6MiB/s (76.2MB/s), 72.6MiB/s-72.6MiB/s (76.2MB/s-76.2MB/s), io=145MiB (152MB), run=2001-2001msec 00:09:22.194 ----------------------------------------------------- 00:09:22.194 Suppressions used: 00:09:22.194 count bytes template 00:09:22.194 1 32 /usr/src/fio/parse.c 00:09:22.194 1 8 libtcmalloc_minimal.so 00:09:22.194 ----------------------------------------------------- 00:09:22.194 00:09:22.194 00:54:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:22.194 00:54:45 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:22.194 00:54:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:22.194 00:54:45 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:22.455 00:54:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:22.455 00:54:45 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:22.716 00:54:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:22.716 00:54:45 nvme.nvme_fio 
-- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:22.716 00:54:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:22.977 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:22.977 fio-3.35 00:09:22.977 Starting 1 thread 00:09:28.322 00:09:28.322 test: (groupid=0, jobs=1): err= 0: pid=77759: Tue Nov 26 00:54:50 2024 00:09:28.322 read: IOPS=22.6k, BW=88.3MiB/s (92.6MB/s)(177MiB/2001msec) 00:09:28.322 slat (nsec): min=4241, max=91719, avg=5082.09, stdev=2283.50 00:09:28.322 clat (usec): min=204, max=14463, avg=2820.13, stdev=1036.62 00:09:28.322 lat (usec): min=208, max=14511, avg=2825.21, stdev=1037.87 00:09:28.322 clat percentiles (usec): 00:09:28.322 | 1.00th=[ 1926], 5.00th=[ 2245], 10.00th=[ 2278], 20.00th=[ 2311], 00:09:28.322 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:28.322 | 70.00th=[ 2540], 80.00th=[ 2868], 90.00th=[ 4424], 95.00th=[ 5342], 00:09:28.322 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[ 8291], 99.95th=[11076], 00:09:28.322 | 99.99th=[14222] 00:09:28.322 bw ( KiB/s): min=62432, max=97960, per=94.45%, avg=85426.67, stdev=19940.90, samples=3 00:09:28.322 iops : min=15608, max=24490, avg=21356.67, stdev=4985.22, samples=3 00:09:28.322 write: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec); 0 zone resets 00:09:28.322 slat (nsec): min=4316, max=84400, avg=5359.22, stdev=2379.13 00:09:28.322 clat (usec): min=243, max=14319, avg=2836.44, stdev=1051.00 00:09:28.322 lat (usec): min=247, max=14334, avg=2841.80, stdev=1052.28 
00:09:28.322 clat percentiles (usec): 00:09:28.322 | 1.00th=[ 1942], 5.00th=[ 2278], 10.00th=[ 2311], 20.00th=[ 2343], 00:09:28.322 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:28.322 | 70.00th=[ 2540], 80.00th=[ 2933], 90.00th=[ 4490], 95.00th=[ 5407], 00:09:28.322 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8455], 99.95th=[11863], 00:09:28.322 | 99.99th=[13960] 00:09:28.322 bw ( KiB/s): min=62816, max=97808, per=95.15%, avg=85592.00, stdev=19741.97, samples=3 00:09:28.322 iops : min=15704, max=24452, avg=21398.00, stdev=4935.49, samples=3 00:09:28.322 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:09:28.322 lat (msec) : 2=1.19%, 4=86.31%, 10=12.38%, 20=0.06% 00:09:28.322 cpu : usr=99.05%, sys=0.10%, ctx=6, majf=0, minf=623 00:09:28.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:28.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:28.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:28.322 issued rwts: total=45246,44998,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:28.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:28.322 00:09:28.322 Run status group 0 (all jobs): 00:09:28.322 READ: bw=88.3MiB/s (92.6MB/s), 88.3MiB/s-88.3MiB/s (92.6MB/s-92.6MB/s), io=177MiB (185MB), run=2001-2001msec 00:09:28.322 WRITE: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:09:28.322 ----------------------------------------------------- 00:09:28.322 Suppressions used: 00:09:28.322 count bytes template 00:09:28.322 1 32 /usr/src/fio/parse.c 00:09:28.322 1 8 libtcmalloc_minimal.so 00:09:28.322 ----------------------------------------------------- 00:09:28.322 00:09:28.322 00:54:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:28.322 00:54:51 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:28.322 00:09:28.322 real 0m25.286s 00:09:28.322 user 0m16.103s 00:09:28.322 sys 0m16.240s 00:09:28.322 00:54:51 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.322 00:54:51 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:28.322 ************************************ 00:09:28.322 END TEST nvme_fio 00:09:28.322 ************************************ 00:09:28.322 00:09:28.322 real 1m32.771s 00:09:28.322 user 3m31.525s 00:09:28.322 sys 0m26.493s 00:09:28.322 00:54:51 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.322 ************************************ 00:09:28.322 END TEST nvme 00:09:28.322 ************************************ 00:09:28.322 00:54:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.322 00:54:51 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:28.322 00:54:51 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:28.322 00:54:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.322 00:54:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.322 00:54:51 -- common/autotest_common.sh@10 -- # set +x 00:09:28.322 ************************************ 00:09:28.322 START TEST nvme_scc 00:09:28.322 ************************************ 00:09:28.322 00:54:51 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:28.322 * Looking for test storage... 
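The three fio passes above (one per PCIe controller) all follow the same pattern: nvme.sh checks the spdk_nvme_identify output for 'Extended Data LBA' to choose --bs (4096 on this branch), then fio_nvme resolves the ASan runtime with ldd and preloads it together with the SPDK ioengine. A condensed sketch of that invocation, with paths taken from the trace:

    # Condensed form of the fio_plugin invocation traced above.
    # fio reserves ':' in filenames, so the PCIe address is written
    # with '.' separators: trtype=PCIe traddr=0000.00.11.0
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # On sanitizer builds the ASan runtime must come first in LD_PRELOAD,
    # which is what the ldd | grep libasan | awk '{print $3}' step resolves.
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')

    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096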
00:09:28.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:28.322 00:54:51 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:28.322 00:54:51 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:28.322 00:54:51 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:28.581 00:54:51 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:28.581 00:54:51 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.581 00:54:51 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 00:54:51 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 00:54:51 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 00:54:51 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:28.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.581 --rc genhtml_branch_coverage=1 00:09:28.581 --rc genhtml_function_coverage=1 00:09:28.581 --rc genhtml_legend=1 00:09:28.581 --rc geninfo_all_blocks=1 00:09:28.581 --rc geninfo_unexecuted_blocks=1 00:09:28.581 00:09:28.581 ' 00:09:28.581 00:54:51 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.581 00:54:51 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.581 00:54:51 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.581 00:54:51 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.581 00:54:51 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.581 00:54:51 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:28.581 00:54:51 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
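The lcov probe at the top of nvme_scc gates the coverage flags on lcov's version: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field, so `lt 1.15 2` succeeds and the lcov-1.x option set is exported. A condensed sketch of the comparison being traced, simplified to the '<' path exercised here (the real cmp_versions also validates each field through decimal):

    # Condensed version compare from the trace: split on .-: and walk
    # the fields; missing fields compare as 0.
    lt() { # true (0) when version $1 < version $2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal is not less-than
    }

    lt 1.15 2 && echo "lcov 1.x: use the --rc lcov_* fallback options"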
00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:28.581 00:54:51 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:28.582 00:54:51 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:28.582 00:54:51 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:28.582 00:54:51 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:28.582 00:54:51 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:28.582 00:54:51 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:28.582 00:54:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:28.582 00:54:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:28.582 00:54:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:28.582 00:54:51 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:28.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.840 Waiting for block devices as requested 00:09:28.840 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.099 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.099 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.099 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.372 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:34.372 00:54:56 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:34.372 00:54:56 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:34.372 00:54:56 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:34.372 00:54:56 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.372 00:54:56 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:56 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:34.372 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.373 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:34.374 00:54:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:34.374 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.375 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:34.375 
00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:34.375 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
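The functions.sh@16-23 lines traced above and below are single passes of the nvme_get helper: it runs /usr/local/src/nvme-cli/nvme against a device node, splits every output line on the first ':' (IFS=:), and stores each register/value pair into a global associative array named after the node (ng0n1, nvme0n1, ...). Below is a minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source, so the whitespace trimming and the NVME_BIN name are assumptions; printf -v also stands in for the eval seen in the trace, purely to avoid quoting pitfalls:

    NVME_BIN=/usr/local/src/nvme-cli/nvme
    nvme_get() {
        local ref=$1 reg val
        shift                        # remaining args: subcommand + node, e.g. id-ns /dev/ng0n1
        local -gA "$ref=()"          # global associative array, as at functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip banner/blank lines (functions.sh@22)
            reg=${reg//[[:space:]]/}             # "lbaf  0" -> "lbaf0", "ps 0" -> "ps0"
            printf -v "${ref}[$reg]" '%s' "${val# }"
        done < <("$NVME_BIN" "$@")
    }
    nvme_get ng0n1 id-ns /dev/ng0n1
    echo "${ng0n1[nsze]}"            # -> 0x140000 on this QEMU namespace

For scale, the values captured here: nsze=0x140000 is 1,310,720 LBAs, and flbas=0x4 selects lbaf4 (lbads:12, i.e. 4096-byte blocks), so the namespace works out to 5,368,709,120 bytes, exactly 5 GiB.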
00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:34.376 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.376 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:34.377 00:54:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:34.377 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.377 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:34.378 00:54:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:34.378 00:54:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:34.378 00:54:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.378 00:54:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:34.378 00:54:57 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.378 
00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.378 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:34.379 
00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.379 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
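Context for the nvme1 pass in progress here: functions.sh@47-63, visible in fragments throughout this trace, drive the whole enumeration. Each /sys/class/nvme/nvme* controller is filtered through pci_can_use, its id-ctrl output is captured, every ng<X>n<Y>/nvme<X>n<Y> node beneath it gets an id-ns pass (ng0n1 and nvme0n1 above), and the results are indexed in ctrls, nvmes, bdfs and ordered_ctrls; nvme0 was registered at 0000:00:11.0, and nvme1 is the QEMU controller at 0000:00:10.0 with subnqn nqn.2019-08.org.qemu:12340. A sketch of that outer loop, reconstructed from the traced line numbers and relying on the nvme_get sketch above; the pci derivation and the pci_can_use stub are assumptions, since the trace only shows their results:

    shopt -s extglob                 # the @(...) pattern at functions.sh@54 needs extglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    # Stub: the real pci_can_use (scripts/common.sh@18-27) filters against
    # allow/block lists; with both empty it returns 0, as traced for 0000:00:10.0.
    pci_can_use() { return 0; }
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed; trace only shows the result
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # functions.sh@52
        declare -A "${ctrl_dev}_ns=()"
        declare -n _ctrl_ns="${ctrl_dev}_ns"              # functions.sh@53
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            [[ -e $ns ]] || continue                      # functions.sh@55
            ns_dev=${ns##*/}                              # ng0n1, nvme0n1, ...
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"       # functions.sh@57
            _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by namespace id (@58)
        done
        unset -n _ctrl_ns
        ctrls["$ctrl_dev"]=$ctrl_dev                      # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # @61: name of the ns map
        bdfs["$ctrl_dev"]=$pci                            # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # @63: numeric index
    done

Indexing ordered_ctrls by the numeric suffix presumably gives later test steps a stable nvme0, nvme1, ... iteration order independent of glob ordering.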
00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.380 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.381 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.381 00:54:57 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.381 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
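For ng1n1 the trace records nsze=ncap=nuse=0x17a17a and flbas=0x7; the low nibble of flbas selects LBA format 7, whose descriptor (lbads:12, i.e. 2^12-byte blocks, marked "in use" a few lines further down) fixes the block size. A quick sanity check of the capacity those two fields imply:

echo $(( 0x17a17a ))          # 1548666 LBAs
echo $(( 0x17a17a * 4096 ))   # 6343335936 bytes, roughly 5.9 GiB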
00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:34.382 00:54:57 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.382 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 
00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
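The per-namespace loop driving this pass (functions.sh@54 above) uses one extglob pattern to match both the generic char node and the block node of each namespace. Expanding it by hand for ctrl=/sys/class/nvme/nvme1, a sketch assuming the sysfs layout the trace reports:

shopt -s extglob
ctrl=/sys/class/nvme/nvme1
# ${ctrl##*nvme} -> 1 and ${ctrl##*/} -> nvme1, so the pattern is @(ng1|nvme1n)*
echo "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
# -> /sys/class/nvme/nvme1/ng1n1 /sys/class/nvme/nvme1/nvme1n1

That is why ng1n1 was parsed first and nvme1n1 is parsed here; since functions.sh@58 indexes _ctrl_ns by ${ns##*n}, both land in _ctrl_ns[1] and the block node, matched last, is what remains.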
00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:34.383 
00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.383 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:34.384 00:54:57 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:34.384 00:54:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:34.384 00:54:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:34.384 00:54:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.384 00:54:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.384 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
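Among the nvme2 id-ctrl fields above, mdts=7 is a power-of-two exponent: the maximum data transfer size is 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN). Assuming the usual 4 KiB minimum page, which this trace does not show:

echo $(( (1 << 7) * 4096 ))   # 524288 bytes, i.e. a 512 KiB per-command cap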
00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:34.385 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.385 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
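One unit detail worth flagging in the wctemp/cctemp values captured here: the NVMe identify-controller fields report temperature thresholds in Kelvin, so 343 and 373 are the common 70 C warning and 100 C critical composite-temperature limits. A one-line check (integer offset, ignoring the 0.15 K):

```bash
# 343 K and 373 K from the trace, converted to Celsius.
for k in 343 373; do echo "$k K = $(( k - 273 )) C"; done
# -> 343 K = 70 C
# -> 373 K = 100 C
```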
00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:34.386 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.386 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:34.387 
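The sqes/cqes bytes read just above encode queue entry sizes as packed powers of two: the low nibble is the required (minimum) size, the high nibble the maximum, each as log2 of the size in bytes. A quick decode of this controller's 0x66 and 0x44, which match the standard 64-byte submission and 16-byte completion entries (the helper name is ours, not from functions.sh):

```bash
# Decode an NVMe SQES/CQES byte: bits 3:0 = min, bits 7:4 = max,
# both log2 of the entry size in bytes.
decode_qes() { echo "min $(( 1 << ($1 & 0xf) )) max $(( 1 << (($1 >> 4) & 0xf) )) bytes"; }
decode_qes 0x66   # -> min 64 max 64 bytes (submission queue)
decode_qes 0x44   # -> min 16 max 16 bytes (completion queue)
```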
00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:34.387 
00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.387 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
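With nsze/ncap/nuse all 0x100000 and flbas=0x4 selecting lbaf4 (ms:0 lbads:12, marked "(in use)" a little further down), this namespace's size works out directly: 1,048,576 LBAs of 2^12 = 4096 bytes each, i.e. 4 GiB. As arithmetic:

```bash
# Namespace size from the fields captured above: nsze is an LBA
# count, lbads (from the in-use LBA format) is log2 of the LBA size.
nsze=0x100000; lbads=12
echo $(( nsze * (1 << lbads) ))   # -> 4294967296 bytes (4 GiB)
```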
00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.388 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:34.389 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 
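Two decoding details in this stretch of trace: the loop registers each device under _ctrl_ns via ${ns##*n}, a parameter expansion that strips everything through the last "n" and leaves the bare namespace index, and the low nibble of flbas (0x4 here as well) names the in-use LBA format. Sketched with a hypothetical sysfs path:

```bash
# ${ns##*n} -> namespace index; flbas & 0xf -> in-use LBA format.
ns=/sys/class/nvme/nvme2/ng2n2     # hypothetical path for illustration
echo "${ns##*n}"                   # -> 2
flbas=0x4
echo "lbaf$(( flbas & 0xf ))"      # -> lbaf4
```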
00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.389 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.390 00:54:57 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.391 00:54:57 
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:34.390 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:09:34.391 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:09:34.392 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
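Each pass of the loop comes from the extglob pattern on functions.sh line 54, which matches both the character-device nodes (ng2n1..ng2n3) and the block-device nodes (nvme2n1..nvme2n3) under the controller's sysfs directory; ${ns##*n} reduces either spelling to the bare namespace index, so _ctrl_ns keeps one device name per index and a later match overwrites an earlier one, which is why the ng2n3 just stored will be replaced by nvme2n3 further down. A self-contained sketch of that loop (the sysfs path is taken from this log; the shopt settings are an assumption, the real suite enables them elsewhere):

#!/usr/bin/env bash
shopt -s extglob nullglob            # assumed here; needed for @(..|..) and empty matches
declare -A _ctrl_ns
ctrl=/sys/class/nvme/nvme2           # controller directory, as in the trace
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue         # same existence guard as functions.sh@55
    _ctrl_ns[${ns##*n}]=${ns##*/}    # key "3" for both ng2n3 and nvme2n3
done
declare -p _ctrl_ns                  # with this log's devices: ([1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3)

Because ng* sorts before nvme* in the glob expansion, the block-device names are what survive in _ctrl_ns once the loop finishes.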
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:34.655 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:34.656 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
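Every listing in this trace reports identical geometry, so the recurring numbers decode once: flbas=0x4 selects LBA format 4 (bits 3:0 of FLBAS), lbaf4 reads ms:0 lbads:12, i.e. 4096-byte logical blocks with no metadata, and nsze=0x100000 such blocks is 4 GiB per namespace. The same arithmetic in bash:

#!/usr/bin/env bash
flbas=0x4 nsze=0x100000            # values recorded for every namespace above
fmt=$(( flbas & 0x0f ))            # FLBAS bits 3:0 pick the active LBA format -> 4
lbads=12                           # lbaf4 above: ms:0 lbads:12 rp:0 (in use)
bs=$(( 1 << lbads ))               # LBADS is a power-of-two exponent -> 4096
echo "lbaf$fmt in use: ${bs}-byte blocks, $(( nsze * bs / 1024**3 )) GiB"

which prints "lbaf4 in use: 4096-byte blocks, 4 GiB".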
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:09:34.657 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
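Once nvme_get has populated these arrays, everything later in the suite is a plain associative-array lookup; a hypothetical consumer, with the arrays seeded from this log's values instead of read from a device:

#!/usr/bin/env bash
# Hypothetical lookup; names and values mirror the trace, nothing here is SPDK source.
declare -A nvme2n1=( [nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)' )
declare -A _ctrl_ns=( [1]=nvme2n1 )
declare -n ns=${_ctrl_ns[1]}       # nameref resolves the stored name to its array
echo "namespace 1 (${!ns}): nsze=${ns[nsze]}, active format: ${ns[lbaf4]}"

The nameref is just one convenient way to follow the device-name string stored in _ctrl_ns back to the array of the same name.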
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:09:34.658 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:09:34.659 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "'
00:09:34.660 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:34.660 00:54:57 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:34.660 00:54:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:34.660 00:54:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:34.660 00:54:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:34.660 00:54:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.660 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:34.661 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:34.661 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 
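The repeating IFS=: / read -r reg val / eval triplets traced above and below are one loop in nvme/functions.sh: each line of nvme-cli's id-ctrl output is split on the first colon and stored in an associative array named after the controller. A minimal sketch of that loop, assuming nvme-cli's "reg : val" output format; the names follow the trace, not a verified copy of the script (the real nvme_get also takes the id-ctrl/id-ns subcommand as an argument):

    # Sketch: fold `nvme id-ctrl /dev/nvmeX` output into a bash
    # associative array, mirroring the eval pattern in the trace.
    nvme_get() {
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"                  # e.g. declares nvme3=() globally
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip lines with no value
        reg=${reg//[[:space:]]/}           # normalize the register name
        eval "${ref}[${reg}]=\"${val# }\"" # e.g. nvme3[oncs]=0x15d
      done < <(nvme id-ctrl "$dev")
    }
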
00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:34.661 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:34.662 00:54:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 
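Two values captured just above are easy to misread: NVMe reports composite temperature thresholds in Kelvin, so wctemp=343 and cctemp=373 are the warning and critical thresholds at 70 and 100 degrees Celsius. A quick check in shell arithmetic:

    echo $(( 343 - 273 ))   # wctemp -> 70 C (warning threshold)
    echo $(( 373 - 273 ))   # cctemp -> 100 C (critical threshold)
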
00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:34.662 
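The sqes=0x66 and cqes=0x44 registers parsed here pack two powers of two per byte: the low nibble is the required queue-entry size and the high nibble the maximum, so 0x66 decodes to 64-byte submission queue entries and 0x44 to 16-byte completion queue entries. The same 2^n convention applies to the lbads values captured for each LBA format earlier; lbads:12 is a 4096-byte block, which matches the block size the simple-copy test prints further down. A quick decode, as a sketch:

    sqes=0x66 cqes=0x44
    echo $(( 1 << (sqes & 0xf) )) $(( 1 << (sqes >> 4) ))   # 64 64 (min/max SQ entry)
    echo $(( 1 << (cqes & 0xf) )) $(( 1 << (cqes >> 4) ))   # 16 16 (min/max CQ entry)
    echo $(( 1 << 12 ))                                     # lbads:12 -> 4096-byte LBA
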
00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.662 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:34.663 00:54:57 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:34.663 00:54:57 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:34.663 00:54:57 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:34.663 00:54:57 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:34.663 00:54:57 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:34.663 00:54:57 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:34.922 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:35.491 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.491 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.491 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.491 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:35.491 00:54:58 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:35.491 00:54:58 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:35.491 00:54:58 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.491 00:54:58 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:35.751 ************************************ 00:09:35.751 START TEST nvme_simple_copy 00:09:35.751 ************************************ 00:09:35.751 00:54:58 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:35.751 Initializing NVMe Controllers 00:09:35.751 Attaching to 0000:00:10.0 00:09:35.751 Controller supports SCC. Attached to 0000:00:10.0 00:09:35.751 Namespace ID: 1 size: 6GB 00:09:35.751 Initialization complete. 
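The controller-selection walk just traced is how nvme_scc.sh picks its target: for each parsed controller, ctrl_has_scc fetches the ONCS register and tests bit 8, the Copy command (simple copy) support bit, and the first passing controller in order (nvme1 here, since 0x15d & 0x100 is non-zero) is echoed back along with its bdf. A minimal sketch of that check, with names following the trace rather than a verified copy of nvme/functions.sh:

    # Sketch of ctrl_has_scc: ONCS bit 8 = Copy command (SCC) supported.
    get_oncs() {
      local -n _ctrl=$1        # nameref into the parsed id-ctrl array
      echo "${_ctrl[oncs]}"    # 0x15d for every controller in this run
    }
    ctrl_has_scc() {
      local ctrl=$1 oncs
      oncs=$(get_oncs "$ctrl")
      (( oncs & 1 << 8 ))      # 0x15d & 0x100 -> 0x100, i.e. true
    }
    ctrl_has_scc nvme1 && echo nvme1   # first match wins -> test runs on 0000:00:10.0
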
00:09:35.751 00:09:35.751 Controller QEMU NVMe Ctrl (12340 ) 00:09:35.751 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:35.751 Namespace Block Size:4096 00:09:35.752 Writing LBAs 0 to 63 with Random Data 00:09:35.752 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:35.752 LBAs matching Written Data: 64 00:09:35.752 00:09:35.752 real 0m0.251s 00:09:35.752 user 0m0.090s 00:09:35.752 sys 0m0.060s 00:09:35.752 ************************************ 00:09:35.752 END TEST nvme_simple_copy 00:09:35.752 ************************************ 00:09:35.752 00:54:58 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.752 00:54:58 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:36.009 ************************************ 00:09:36.009 END TEST nvme_scc 00:09:36.009 ************************************ 00:09:36.009 00:09:36.009 real 0m7.567s 00:09:36.009 user 0m1.056s 00:09:36.009 sys 0m1.368s 00:09:36.009 00:54:58 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.009 00:54:58 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:36.009 00:54:58 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:36.009 00:54:58 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:36.009 00:54:58 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:36.009 00:54:58 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:36.009 00:54:58 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:36.009 00:54:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:36.009 00:54:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.009 00:54:58 -- common/autotest_common.sh@10 -- # set +x 00:09:36.009 ************************************ 00:09:36.009 START TEST nvme_fdp 00:09:36.009 ************************************ 00:09:36.009 00:54:58 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:36.009 * Looking for test storage... 00:09:36.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:36.009 00:54:58 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.009 00:54:58 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.009 00:54:58 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.009 00:54:58 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.009 00:54:58 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:36.009 00:54:58 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.009 00:54:58 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.009 --rc genhtml_branch_coverage=1 00:09:36.009 --rc genhtml_function_coverage=1 00:09:36.009 --rc genhtml_legend=1 00:09:36.009 --rc geninfo_all_blocks=1 00:09:36.009 --rc geninfo_unexecuted_blocks=1 00:09:36.009 00:09:36.009 ' 00:09:36.010 00:54:58 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.010 --rc genhtml_branch_coverage=1 00:09:36.010 --rc genhtml_function_coverage=1 00:09:36.010 --rc genhtml_legend=1 00:09:36.010 --rc geninfo_all_blocks=1 00:09:36.010 --rc geninfo_unexecuted_blocks=1 00:09:36.010 00:09:36.010 ' 00:09:36.010 00:54:58 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.010 --rc genhtml_branch_coverage=1 00:09:36.010 --rc genhtml_function_coverage=1 00:09:36.010 --rc genhtml_legend=1 00:09:36.010 --rc geninfo_all_blocks=1 00:09:36.010 --rc geninfo_unexecuted_blocks=1 00:09:36.010 00:09:36.010 ' 00:09:36.010 00:54:58 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.010 --rc genhtml_branch_coverage=1 00:09:36.010 --rc genhtml_function_coverage=1 00:09:36.010 --rc genhtml_legend=1 00:09:36.010 --rc geninfo_all_blocks=1 00:09:36.010 --rc geninfo_unexecuted_blocks=1 00:09:36.010 00:09:36.010 ' 00:09:36.010 00:54:58 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:36.010 00:54:58 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.010 00:54:58 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.010 00:54:58 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.010 00:54:58 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.010 00:54:58 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.010 00:54:58 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.010 00:54:58 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.010 00:54:58 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:36.010 00:54:58 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:36.010 00:54:58 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:36.010 00:54:58 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:36.010 00:54:58 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:36.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:36.526 Waiting for block devices as requested 00:09:36.526 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.526 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.784 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.784 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.075 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:42.075 00:55:04 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:42.075 00:55:04 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:42.075 00:55:04 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:42.075 00:55:04 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:42.075 00:55:04 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.075 00:55:04 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.075 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:42.076 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.076 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:42.076 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:42.077 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:42.077 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 
00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:42.078 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:42.078 00:55:04 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:42.078 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:42.079 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
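
The xtrace above repeats a single pattern for every register: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns), splits each "key : value" output line on the first colon with `IFS=: read -r reg val`, and stores the result in a bash associative array (nvme0, ng0n1, ...) via eval. A minimal standalone sketch of that loop follows, assuming a plainly named array and the device path shown in the trace; it is an illustration of the pattern, not the real nvme/functions.sh code:

#!/usr/bin/env bash
# Sketch of the register-parsing loop traced above (illustrative only).
# Each nvme-cli output line looks like "nsze   : 0x140000"; split it on
# the first colon and keep the trimmed key/value pair.
declare -A ng0n1_regs

while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # keys arrive space-padded ("lbaf  0" -> "lbaf0")
    [[ -n $reg && -n $val ]] || continue  # skip blank or unparsable lines
    read -r val <<< "$val"                # trim surrounding whitespace only
    ng0n1_regs[$reg]=$val                 # e.g. ng0n1_regs[nsze]=0x140000
done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1)

printf 'nsze=%s flbas=%s\n' "${ng0n1_regs[nsze]}" "${ng0n1_regs[flbas]}"

The actual helper writes into a dynamically named array (local -gA 'ng0n1=()' plus eval), which is why every register appears twice in the trace: once as the quoted eval 'ng0n1[...]="..."' line and once as the resulting assignment.
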
00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:42.079 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:42.080 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
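
The ng0n1 record is now complete, and the captured fields line up with the earlier simple-copy run: the in-use LBA format is lbaf4 (ms:0 lbads:12), i.e. 4096-byte blocks, matching the "Namespace Block Size:4096" line printed by nvme_simple_copy, and the copy limits mssrl=128, mcl=128, msrc=127 (a 0's-based value per the NVMe spec, so up to 128 source ranges) easily cover that test's 64-LBA copy. A back-of-envelope capacity check, with the two inputs hard-coded from the trace above:

#!/usr/bin/env bash
# Values taken from the parsed ng0n1 record above; the rest is arithmetic.
nsze=0x140000    # namespace size in logical blocks (id-ns nsze)
lbads=12         # log2(block size) of the in-use format, lbaf4

bytes=$(( nsze * (1 << lbads) ))
printf 'blocks=%d block_size=%d capacity=%d bytes (%d MiB)\n' \
    $(( nsze )) $(( 1 << lbads )) "$bytes" $(( bytes >> 20 ))
# prints: blocks=1310720 block_size=4096 capacity=5368709120 bytes (5120 MiB)
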
00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.080 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:42.081 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.081 00:55:04 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.081 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:42.082 00:55:04 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:42.082 00:55:04 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:42.082 00:55:04 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.082 00:55:04 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:42.082 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:42.082 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
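(editorial sketch, not part of the captured console output) The @47-@63 markers earlier in this trace show the outer loop that drives these dumps: each /sys/class/nvme/nvmeN controller is resolved to its PCI address, checked with pci_can_use from scripts/common.sh, identified via nvme_get, and registered in lookup tables. A hedged reconstruction, reusing the nvme_get sketch above and assuming the PCI address comes from the controller's sysfs device link:

    shopt -s nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")      # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue                       # skip disallowed controllers
        ctrl_dev=${ctrl##*/}                                 # e.g. nvme1
        nvme_get "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"   # fills nvme1[vid], nvme1[ctratt], ...
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                    # name of this controller's namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev           # indexed by controller number
    done

This matches the registrations visible in the trace (ctrls/nvmes/bdfs/ordered_ctrls for nvme0 at 0000:00:11.0, then the same pass starting for nvme1 at 0000:00:10.0).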
00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:42.083 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.084 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:42.085 00:55:04 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.085 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:42.086 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
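The flbas value captured for ng1n1 above (0x7) ties this dump together: bits 3:0 of FLBAS index the lbaf descriptors recorded a few entries below, so format 7 ("ms:64 lbads:12", marked "(in use)") is the active one. A quick decode using only values from this log, with lbads taken as a given rather than re-parsed:

  flbas=0x7
  fmt=$(( flbas & 0xf ))      # bits 3:0 -> LBA format 7
  lbads=12                    # lbaf7 reads "ms:64 lbads:12 rp:0 (in use)" below
  echo "lbaf$fmt: $(( 1 << lbads ))-byte blocks + 64-byte metadata"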
00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.086 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.086 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.087 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:42.087 00:55:04 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:42.087 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:42.088 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
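With nvme1n1's identify data parsed, the usable size falls out of two captured fields: nsze counts logical blocks and the active LBA format fixes the block size. A worked check against this log's values (a sketch, not something functions.sh itself computes):

  nsze=0x17a17a                      # nvme1n1[nsze] as parsed above
  lbads=12                           # active format lbaf7 -> 2^12-byte blocks
  echo $(( nsze * (1 << lbads) ))    # 6343335936 bytes, ~5.9 GiB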
00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:42.088 00:55:04 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:42.088 00:55:04 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:42.088 00:55:04 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:42.088 00:55:04 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:42.088 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
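Two of the nvme2 controller fields just captured decode the same way by hand: ver=0x10400 packs major/minor/tertiary version as 16/8/8 bits (NVMe 1.4.0), and mdts=7 caps any data transfer at 2^7 minimum-size pages. The 4 KiB CAP.MPSMIN below is an assumption (typical for QEMU controllers), not something read from this log:

  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
  mdts=7
  echo $(( (1 << mdts) * 4096 ))     # 524288 bytes (512 KiB) per command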
00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:42.089 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
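The wctemp/cctemp values just above are reported in kelvins, as the spec defines them; converting makes the QEMU defaults recognizable:

  wctemp=343; cctemp=373
  echo "warning: $(( wctemp - 273 )) C, critical: $(( cctemp - 273 )) C"   # 70 C / 100 C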
00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.089 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:42.090 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:42.090 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
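Among the nvme2 fields above, sqes=0x66 and cqes=0x44 encode queue entry sizes as two packed powers of two: the low nibble is the required minimum, the high nibble the maximum supported. Decoded:

  sqes=0x66; cqes=0x44
  echo "SQ entries: $(( 1 << (sqes & 0xf) )) bytes"   # 64 (min == max here)
  echo "CQ entries: $(( 1 << (cqes & 0xf) )) bytes"   # 16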
00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.091 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 
00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.092 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:42.093 00:55:04 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:42.093 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 
00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:42.094 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:42.095 
00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:42.095 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.095 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:42.096 00:55:04 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:42.096 00:55:04 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:42.096 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
  [id-ns registers parsed into the nvme2n1 array]
  nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
  nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
  nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
  anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0="ms:0 lbads:9 rp:0" lbaf1="ms:8 lbads:9 rp:0" lbaf2="ms:16 lbads:9 rp:0" lbaf3="ms:64 lbads:9 rp:0"
  lbaf4="ms:0 lbads:12 rp:0 (in use)" lbaf5="ms:8 lbads:12 rp:0" lbaf6="ms:16 lbads:12 rp:0" lbaf7="ms:64 lbads:12 rp:0"
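The trace above is bash xtrace from the nvme_get helper in nvme/functions.sh: it pipes nvme-cli output through an `IFS=: read -r reg val` loop and evals each pair into a global associative array named after the device node. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script (details will differ from the real helper):

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in the trace: run an
    # nvme-cli query and fold each "reg : val" output line into a
    # global associative array named after the device, so callers can
    # later read e.g. ${nvme2n1[nsze]}. Reconstructed from the xtrace.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # global assoc array, as in the trace
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "nsze   " -> "nsze"
            val=${val#"${val%%[![:space:]]*}"}   # trim leading spaces from the value
            [[ -n $reg && -n $val ]] || continue # skip blank/headerless lines
            eval "${ref}[\$reg]=\$val"
        done < <("$@")
    }

    # Usage sketch (requires nvme-cli and a real namespace):
    # nvme_get nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
    # echo "${nvme2n1[nsze]}"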
00:09:42.098 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:42.098 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:42.098 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:42.098 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:42.098 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:42.098 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
  [id-ns registers parsed into the nvme2n2 array: identical values to the nvme2n1 listing above]
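The `for ns in "$ctrl/"@(...)` record shows how the walker enumerates a controller's namespaces: one extglob alternation that matches both the block nodes (nvme2n1, nvme2n2, ...) and their generic-character `ng` counterparts under the controller's sysfs directory. A standalone sketch of that globbing, assuming extglob/nullglob semantics (the echo is illustrative only):

    #!/usr/bin/env bash
    # Sketch of the namespace enumeration the trace shows: under each
    # /sys/class/nvme/nvmeX, match both block namespaces (nvmeXnY) and
    # generic char nodes (ngX*) with a single extglob alternation.
    shopt -s extglob nullglob

    for ctrl in /sys/class/nvme/nvme*; do
        inst=${ctrl##*nvme}                       # e.g. "2" for .../nvme2
        for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
            ns=${ns##*/}                          # nvme2n1, nvme2n2, ...
            echo "controller ${ctrl##*/}: namespace node $ns"
        done
    done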
00:09:42.099 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:42.099 00:55:04 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:42.099 00:55:04 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:42.099 00:55:04 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:42.099 00:55:04 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:42.099 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
  [id-ns registers parsed into the nvme2n3 array: identical values to the nvme2n1 listing above]
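Each lbafN entry in these dumps pairs a metadata size (ms) with lbads, the log2 of the LBA data size, and flbas=0x4 selects format 4 (ms:0 lbads:12), which is why that entry carries the "(in use)" tag. A quick decode of what those numbers mean, assuming the standard NVMe encoding of FLBAS bits 3:0 as the format index:

    #!/usr/bin/env bash
    # Decode the active LBA format from the values in the dumps above:
    # FLBAS bits 3:0 index into the lbaf table, and lbads is the log2
    # of the LBA data size, so lbads:12 means 4096-byte blocks with no
    # per-block metadata (ms:0).
    flbas=0x4
    lbads=12
    fmt=$(( flbas & 0xf ))           # -> 4, i.e. the "(in use)" lbaf4 entry
    block_size=$(( 1 << lbads ))     # -> 4096
    echo "lbaf${fmt}: ${block_size}-byte LBAs"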
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:42.101 00:55:04 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:42.101 00:55:04 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:09:42.101 00:55:04 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:42.101 00:55:04 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:42.101 00:55:04 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
  [id-ctrl registers parsed into the nvme3 array]
  vid=0x1b36 ssvid=0x1af4 sn="12343" mn="QEMU NVMe Ctrl" fr="8.0.0" rab=6 ieee=525400 cmic=0x2 mdts=7
  cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1
  fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
  oacs=0x12a acl=3 aerl=3 frmw=0x3
00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:42.102 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:42.103 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:42.104 00:55:04 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:42.104 00:55:04 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:09:42.105 00:55:04 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:09:42.105 00:55:04 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:09:42.105 00:55:04 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:09:42.105 00:55:04 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:42.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:42.930 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:42.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:42.930 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.189 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:43.189 00:55:05 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:43.189 00:55:05 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:43.189 00:55:05 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:43.189 00:55:05 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:43.189 ************************************
00:09:43.189 START TEST nvme_flexible_data_placement
00:09:43.189 ************************************
00:09:43.189 00:55:05 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:09:43.448 Initializing NVMe Controllers
00:09:43.448 Attaching to 0000:00:13.0
00:09:43.448 Controller supports FDP Attached to 0000:00:13.0
00:09:43.448 Namespace ID: 1 Endurance Group ID: 1
00:09:43.448 Initialization complete.
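Before the FDP report below, a note on how nvme3 was selected. The long xtrace above is nvme/functions.sh folding "register: value" pairs from the controller identify output into a bash associative array, then filtering controllers on the FDP bit of CTRATT. A condensed, hedged sketch of both steps (the sample values are taken from this run; the real script iterates every register of every controller):

    #!/usr/bin/env bash
    # Fold "reg: value" lines into an associative array, mirroring the
    # IFS=:/read/eval pattern at nvme/functions.sh@21-23 in the xtrace above.
    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] && eval "nvme3[${reg// /}]=\"${val# }\""
    done < <(printf '%s\n' 'vid: 0x1b36' 'ssvid: 0x1af4' 'ctratt: 0x88010')

    # ctrl_has_fdp (functions.sh@176-180): CTRATT bit 19 flags FDP support.
    if (( nvme3[ctratt] & 1 << 19 )); then
        echo "nvme3 supports FDP"   # 0x88010 has bit 19 (0x80000) set; 0x8000 does not
    fi

Only nvme3 reports CTRATT 0x88010 in this run, which is why the harness echoes nvme3 and records bdf 0000:00:13.0.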
00:09:43.448
00:09:43.448 ==================================
00:09:43.448 == FDP tests for Namespace: #01 ==
00:09:43.448 ==================================
00:09:43.448
00:09:43.448 Get Feature: FDP:
00:09:43.448 =================
00:09:43.448 Enabled: Yes
00:09:43.448 FDP configuration Index: 0
00:09:43.448
00:09:43.448 FDP configurations log page
00:09:43.448 ===========================
00:09:43.448 Number of FDP configurations: 1
00:09:43.448 Version: 0
00:09:43.448 Size: 112
00:09:43.448 FDP Configuration Descriptor: 0
00:09:43.448 Descriptor Size: 96
00:09:43.448 Reclaim Group Identifier format: 2
00:09:43.448 FDP Volatile Write Cache: Not Present
00:09:43.448 FDP Configuration: Valid
00:09:43.448 Vendor Specific Size: 0
00:09:43.448 Number of Reclaim Groups: 2
00:09:43.448 Number of Reclaim Unit Handles: 8
00:09:43.448 Max Placement Identifiers: 128
00:09:43.448 Number of Namespaces Supported: 256
00:09:43.448 Reclaim Unit Nominal Size: 6000000 bytes
00:09:43.448 Estimated Reclaim Unit Time Limit: Not Reported
00:09:43.448 RUH Desc #000: RUH Type: Initially Isolated
00:09:43.448 RUH Desc #001: RUH Type: Initially Isolated
00:09:43.448 RUH Desc #002: RUH Type: Initially Isolated
00:09:43.448 RUH Desc #003: RUH Type: Initially Isolated
00:09:43.448 RUH Desc #004: RUH Type: Initially Isolated
00:09:43.448 RUH Desc #005: RUH Type: Initially Isolated
00:09:43.448 RUH Desc #006: RUH Type: Initially Isolated
00:09:43.448 RUH Desc #007: RUH Type: Initially Isolated
00:09:43.448
00:09:43.448 FDP reclaim unit handle usage log page
00:09:43.448 ======================================
00:09:43.448 Number of Reclaim Unit Handles: 8
00:09:43.448 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:09:43.448 RUH Usage Desc #001: RUH Attributes: Unused
00:09:43.448 RUH Usage Desc #002: RUH Attributes: Unused
00:09:43.448 RUH Usage Desc #003: RUH Attributes: Unused
00:09:43.448 RUH Usage Desc #004: RUH Attributes: Unused
00:09:43.448 RUH Usage Desc #005: RUH Attributes: Unused
00:09:43.448 RUH Usage Desc #006: RUH Attributes: Unused
00:09:43.448 RUH Usage Desc #007: RUH Attributes: Unused
00:09:43.448
00:09:43.448 FDP statistics log page
00:09:43.448 =======================
00:09:43.448 Host bytes with metadata written: 2021363712
00:09:43.448 Media bytes with metadata written: 2022051840
00:09:43.448 Media bytes erased: 0
00:09:43.448
00:09:43.448 FDP Reclaim unit handle status
00:09:43.448 ==============================
00:09:43.448 Number of RUHS descriptors: 2
00:09:43.448 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005847
00:09:43.448 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:09:43.448
00:09:43.448 FDP write on placement id: 0 success
00:09:43.448
00:09:43.448 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:09:43.448
00:09:43.448 IO mgmt send: RUH update for Placement ID: #0 Success
00:09:43.448
00:09:43.448 Get Feature: FDP Events for Placement handle: #0
00:09:43.448 ========================
00:09:43.448 Number of FDP Events: 6
00:09:43.448 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:09:43.448 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:09:43.448 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:09:43.448 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:09:43.448 FDP Event: #4 Type: Media Reallocated Enabled: No
00:09:43.448 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:09:43.448
00:09:43.448 FDP events log page
00:09:43.448 ===================
00:09:43.448 Number of FDP events: 1
00:09:43.448 FDP Event #0:
00:09:43.448 Event Type: RU Not Written to Capacity
00:09:43.448 Placement Identifier: Valid
00:09:43.448 NSID: Valid
00:09:43.448 Location: Valid
00:09:43.448 Placement Identifier: 0
00:09:43.448 Event Timestamp: 2
00:09:43.448 Namespace Identifier: 1
00:09:43.448 Reclaim Group Identifier: 0
00:09:43.448 Reclaim Unit Handle Identifier: 0
00:09:43.448
00:09:43.448 FDP test passed
00:09:43.448
00:09:43.448 real 0m0.235s
00:09:43.448 user 0m0.070s
00:09:43.448 sys 0m0.064s
00:09:43.448 00:55:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:43.448 00:55:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:09:43.448 ************************************
00:09:43.448 END TEST nvme_flexible_data_placement
00:09:43.448 ************************************
00:09:43.448 ************************************
00:09:43.448 END TEST nvme_fdp
00:09:43.448 ************************************
00:09:43.449
00:09:43.449 real 0m7.470s
00:09:43.449 user 0m1.021s
00:09:43.449 sys 0m1.382s
00:09:43.449 00:55:06 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:43.449 00:55:06 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:09:43.449 00:55:06 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:09:43.449 00:55:06 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:43.449 00:55:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:43.449 00:55:06 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:43.449 00:55:06 -- common/autotest_common.sh@10 -- # set +x
00:09:43.449 ************************************
00:09:43.449 START TEST nvme_rpc
00:09:43.449 ************************************
00:09:43.449 00:55:06 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:09:43.449 * Looking for test storage...
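The test-storage probe just started resumes below; immediately after it, nvme_rpc (and later nvme_rpc_timeouts) replays the same lcov version gate from scripts/common.sh ("lt 1.15 2" -> "cmp_versions 1.15 '<' 2"). A simplified runnable sketch of that gate, restricted to the '<' case shown in the xtrace (the real helper also handles '>', '=', and counts matches differently):

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        local ver1_l ver2_l v
        # Same split as scripts/common.sh@336-337: dots, dashes, colons.
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]}
        ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions fail the "<" test
    }
    lt 1.15 2 && echo "lcov is older than 2.x"   # matches this run: 1.15 < 2

In this log the gate passes (lcov 1.15 is older than 2), which is why the legacy LCOV_OPTS/--rc flags get exported right after.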
00:09:43.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.449 00:55:06 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.449 00:55:06 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.449 00:55:06 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.707 00:55:06 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.707 --rc genhtml_branch_coverage=1 00:09:43.707 --rc genhtml_function_coverage=1 00:09:43.707 --rc genhtml_legend=1 00:09:43.707 --rc geninfo_all_blocks=1 00:09:43.707 --rc geninfo_unexecuted_blocks=1 00:09:43.707 00:09:43.707 ' 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.707 --rc genhtml_branch_coverage=1 00:09:43.707 --rc genhtml_function_coverage=1 00:09:43.707 --rc genhtml_legend=1 00:09:43.707 --rc geninfo_all_blocks=1 00:09:43.707 --rc geninfo_unexecuted_blocks=1 00:09:43.707 00:09:43.707 ' 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:43.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.707 --rc genhtml_branch_coverage=1 00:09:43.707 --rc genhtml_function_coverage=1 00:09:43.707 --rc genhtml_legend=1 00:09:43.707 --rc geninfo_all_blocks=1 00:09:43.707 --rc geninfo_unexecuted_blocks=1 00:09:43.707 00:09:43.707 ' 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.707 --rc genhtml_branch_coverage=1 00:09:43.707 --rc genhtml_function_coverage=1 00:09:43.707 --rc genhtml_legend=1 00:09:43.707 --rc geninfo_all_blocks=1 00:09:43.707 --rc geninfo_unexecuted_blocks=1 00:09:43.707 00:09:43.707 ' 00:09:43.707 00:55:06 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.707 00:55:06 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:43.707 00:55:06 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:43.707 00:55:06 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:43.707 00:55:06 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=79125 00:09:43.707 00:55:06 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:43.707 00:55:06 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 79125 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 79125 ']' 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.707 00:55:06 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.707 [2024-11-26 00:55:06.534639] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:09:43.707 [2024-11-26 00:55:06.534774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79125 ] 00:09:43.965 [2024-11-26 00:55:06.669891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:43.965 [2024-11-26 00:55:06.698181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.965 [2024-11-26 00:55:06.718974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.965 [2024-11-26 00:55:06.718994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.530 00:55:07 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:44.530 00:55:07 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:44.530 00:55:07 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:44.788 Nvme0n1 00:09:44.788 00:55:07 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:44.788 00:55:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:45.046 request: 00:09:45.046 { 00:09:45.046 "bdev_name": "Nvme0n1", 00:09:45.046 "filename": "non_existing_file", 00:09:45.046 "method": "bdev_nvme_apply_firmware", 00:09:45.046 "req_id": 1 00:09:45.046 } 00:09:45.046 Got JSON-RPC error response 00:09:45.047 response: 00:09:45.047 { 00:09:45.047 "code": -32603, 00:09:45.047 "message": "open file failed." 00:09:45.047 } 00:09:45.047 00:55:07 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:45.047 00:55:07 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:45.047 00:55:07 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:45.305 00:55:08 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:45.305 00:55:08 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 79125 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 79125 ']' 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 79125 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79125 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79125' 00:09:45.305 killing process with pid 79125 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@973 -- # kill 79125 00:09:45.305 00:55:08 nvme_rpc -- common/autotest_common.sh@978 -- # wait 79125 00:09:45.563 00:09:45.563 real 0m2.109s 00:09:45.563 user 0m4.075s 00:09:45.563 sys 0m0.505s 00:09:45.563 00:55:08 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.563 00:55:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.563 ************************************ 00:09:45.563 END TEST nvme_rpc 00:09:45.563 ************************************ 00:09:45.563 00:55:08 
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.563 00:55:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.563 00:55:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.563 00:55:08 -- common/autotest_common.sh@10 -- # set +x 00:09:45.563 ************************************ 00:09:45.563 START TEST nvme_rpc_timeouts 00:09:45.563 ************************************ 00:09:45.563 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.563 * Looking for test storage... 00:09:45.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.563 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:45.563 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:45.563 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:45.820 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:45.820 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.821 00:55:08 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.821 --rc genhtml_branch_coverage=1 00:09:45.821 --rc genhtml_function_coverage=1 00:09:45.821 --rc genhtml_legend=1 00:09:45.821 --rc geninfo_all_blocks=1 00:09:45.821 --rc geninfo_unexecuted_blocks=1 00:09:45.821 00:09:45.821 ' 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.821 --rc genhtml_branch_coverage=1 00:09:45.821 --rc genhtml_function_coverage=1 00:09:45.821 --rc genhtml_legend=1 00:09:45.821 --rc geninfo_all_blocks=1 00:09:45.821 --rc geninfo_unexecuted_blocks=1 00:09:45.821 00:09:45.821 ' 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.821 --rc genhtml_branch_coverage=1 00:09:45.821 --rc genhtml_function_coverage=1 00:09:45.821 --rc genhtml_legend=1 00:09:45.821 --rc geninfo_all_blocks=1 00:09:45.821 --rc geninfo_unexecuted_blocks=1 00:09:45.821 00:09:45.821 ' 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.821 --rc genhtml_branch_coverage=1 00:09:45.821 --rc genhtml_function_coverage=1 00:09:45.821 --rc genhtml_legend=1 00:09:45.821 --rc geninfo_all_blocks=1 00:09:45.821 --rc geninfo_unexecuted_blocks=1 00:09:45.821 00:09:45.821 ' 00:09:45.821 00:55:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.821 00:55:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79179 00:09:45.821 00:55:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79179 00:09:45.821 00:55:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79211 00:09:45.821 00:55:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
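The "waitforlisten 79211" step on the next line blocks until the freshly started spdk_tgt answers on its JSON-RPC socket. A plausible minimal equivalent, assuming the standard rpc.py flags (-s for the socket, -t for a per-call timeout) and the rpc_get_methods method; the real autotest_common.sh helper also checks that the PID stays alive and bounds the retries:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # Any successful RPC round-trip means the target is listening.
        if "$RPC" -s "$SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt is up on $SOCK"
            break
        fi
        sleep 0.1
    done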
00:09:45.821 00:55:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79211 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 79211 ']' 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.821 00:55:08 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:45.821 00:55:08 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:45.821 [2024-11-26 00:55:08.612009] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:09:45.821 [2024-11-26 00:55:08.612136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79211 ] 00:09:46.078 [2024-11-26 00:55:08.747455] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:46.078 [2024-11-26 00:55:08.770923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.078 [2024-11-26 00:55:08.794823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.078 [2024-11-26 00:55:08.794896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.708 00:55:09 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.708 Checking default timeout settings: 00:09:46.708 00:55:09 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:46.708 00:55:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:46.708 00:55:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:46.992 Making settings changes with rpc: 00:09:46.992 00:55:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:46.992 00:55:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:47.249 Check default vs. modified settings: 00:09:47.249 00:55:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:47.249 00:55:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79179 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79179 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:47.507 Setting action_on_timeout is changed as expected. 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79179 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.507 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79179 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:47.508 Setting timeout_us is changed as expected. 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79179 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79179 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:47.508 Setting timeout_admin_us is changed as expected. 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79179 /tmp/settings_modified_79179 00:09:47.508 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79211 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 79211 ']' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 79211 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79211 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.508 killing process with pid 79211 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79211' 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 79211 00:09:47.508 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 79211 00:09:47.766 RPC TIMEOUT SETTING TEST PASSED. 00:09:47.766 00:55:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
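The verification pattern traced above is worth spelling out: dump the target's JSON configuration with rpc.py save_config before and after bdev_nvme_set_options, pull each field out with grep/awk/sed, and require that every value actually changed. A condensed sketch of that flow (tmpfile paths are illustrative; the rpc.py path is taken from the trace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" save_config > /tmp/settings_default
    "$rpc_py" bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    "$rpc_py" save_config > /tmp/settings_modified

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # take the token after '"<setting>":' and strip everything but [a-zA-Z0-9]
        before=$(grep "$setting" /tmp/settings_default  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting"  /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && { echo "Setting $setting did not change" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done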
00:09:47.766 ************************************ 00:09:47.766 END TEST nvme_rpc_timeouts 00:09:47.766 ************************************ 00:09:47.766 00:09:47.766 real 0m2.281s 00:09:47.766 user 0m4.575s 00:09:47.766 sys 0m0.502s 00:09:47.766 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.766 00:55:10 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:48.025 00:55:10 -- spdk/autotest.sh@239 -- # uname -s 00:09:48.025 00:55:10 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:48.025 00:55:10 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:48.025 00:55:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.025 00:55:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.025 00:55:10 -- common/autotest_common.sh@10 -- # set +x 00:09:48.025 ************************************ 00:09:48.025 START TEST sw_hotplug 00:09:48.025 ************************************ 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:48.025 * Looking for test storage... 00:09:48.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.025 00:55:10 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.025 --rc genhtml_branch_coverage=1 00:09:48.025 --rc genhtml_function_coverage=1 00:09:48.025 --rc genhtml_legend=1 00:09:48.025 --rc geninfo_all_blocks=1 00:09:48.025 --rc geninfo_unexecuted_blocks=1 00:09:48.025 00:09:48.025 ' 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.025 --rc genhtml_branch_coverage=1 00:09:48.025 --rc genhtml_function_coverage=1 00:09:48.025 --rc genhtml_legend=1 00:09:48.025 --rc geninfo_all_blocks=1 00:09:48.025 --rc geninfo_unexecuted_blocks=1 00:09:48.025 00:09:48.025 ' 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.025 --rc genhtml_branch_coverage=1 00:09:48.025 --rc genhtml_function_coverage=1 00:09:48.025 --rc genhtml_legend=1 00:09:48.025 --rc geninfo_all_blocks=1 00:09:48.025 --rc geninfo_unexecuted_blocks=1 00:09:48.025 00:09:48.025 ' 00:09:48.025 00:55:10 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:48.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.025 --rc genhtml_branch_coverage=1 00:09:48.025 --rc genhtml_function_coverage=1 00:09:48.025 --rc genhtml_legend=1 00:09:48.025 --rc geninfo_all_blocks=1 00:09:48.025 --rc geninfo_unexecuted_blocks=1 00:09:48.025 00:09:48.025 ' 00:09:48.025 00:55:10 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:48.283 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:48.542 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:48.542 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:48.542 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:48.542 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:48.542 00:55:11 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:48.542 00:55:11 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:48.542 00:55:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
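Both suites run this same coverage preamble before touching any hardware: when the installed lcov predates 2.x (the "lt 1.15 2" comparison above returning 0), branch and function coverage must be requested with the old 1.x flag names, so the harness exports LCOV_OPTS and LCOV with those --rc options baked in. Roughly, reusing version_lt from the earlier sketch (the 2.x flag names in the else branch are an assumption based on lcov's option rename, not shown in this trace):

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'  # lcov 1.x names
    else
        lcov_rc_opt='--rc branch_coverage=1 --rc function_coverage=1'  # assumed lcov 2.x names
    fi
    export LCOV_OPTS="
        $lcov_rc_opt
        --rc genhtml_branch_coverage=1
        --rc genhtml_function_coverage=1
        --rc genhtml_legend=1
        --rc geninfo_all_blocks=1
        --rc geninfo_unexecuted_blocks=1
    "
    export LCOV="lcov $lcov_rc_opt"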
00:09:48.542 00:55:11 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:48.542 00:55:11 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:48.542 00:55:11 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:48.543 00:55:11 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:48.543 00:55:11 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:48.543 00:55:11 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:48.543 00:55:11 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:48.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.063 Waiting for block devices as requested 00:09:49.063 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.064 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.064 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.322 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:54.590 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:54.590 00:55:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:54.590 00:55:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:54.590 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:54.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:54.848 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:54.848 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:55.107 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.107 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:55.366 00:55:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=80057 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:55.366 00:55:18 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:55.366 00:55:18 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:55.366 00:55:18 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:55.366 00:55:18 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:55.366 00:55:18 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:55.366 00:55:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:55.624 Initializing NVMe Controllers 00:09:55.624 Attaching to 0000:00:10.0 00:09:55.624 Attaching to 0000:00:11.0 00:09:55.624 Attached to 0000:00:11.0 00:09:55.624 Attached to 0000:00:10.0 00:09:55.624 Initialization complete. Starting I/O... 
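Everything the hotplug run needs was derived above by nvme_in_userspace: enumerate PCI functions whose class code is 0108 (class 01 mass storage, subclass 08 NVM; prog-if 02 marks NVMe), filter them through pci_can_use, which honors the PCI_ALLOWED list exported just before setup.sh, and keep only functions still bound to the kernel nvme driver. A compact sketch of that enumeration (the allowlist handling is simplified here to a substring match; the real pci_can_use does more):

    printf -v class    '%02x' 1      # 01 = mass storage
    printf -v subclass '%02x' 8      # 08 = non-volatile memory
    cc="\"${class}${subclass}\""

    # lspci -mm -n -D prints one record per function: <dbdf> "<class>" "<vendor>" "<device>" ...
    nvmes=($(lspci -mm -n -D | awk -v cc="$cc" -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'))

    allowed='0000:00:10.0 0000:00:11.0'   # mirrors PCI_ALLOWED in this run
    for bdf in "${nvmes[@]}"; do
        [[ $allowed == *"$bdf"* ]] || continue
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
    done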
00:09:55.624 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:55.624 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:55.624 00:09:56.557 QEMU NVMe Ctrl (12341 ): 3371 I/Os completed (+3371) 00:09:56.557 QEMU NVMe Ctrl (12340 ): 3422 I/Os completed (+3422) 00:09:56.557 00:09:57.488 QEMU NVMe Ctrl (12341 ): 9357 I/Os completed (+5986) 00:09:57.488 QEMU NVMe Ctrl (12340 ): 9686 I/Os completed (+6264) 00:09:57.488 00:09:58.420 QEMU NVMe Ctrl (12341 ): 15546 I/Os completed (+6189) 00:09:58.420 QEMU NVMe Ctrl (12340 ): 17033 I/Os completed (+7347) 00:09:58.420 00:09:59.794 QEMU NVMe Ctrl (12341 ): 22628 I/Os completed (+7082) 00:09:59.794 QEMU NVMe Ctrl (12340 ): 24965 I/Os completed (+7932) 00:09:59.794 00:10:00.727 QEMU NVMe Ctrl (12341 ): 27732 I/Os completed (+5104) 00:10:00.727 QEMU NVMe Ctrl (12340 ): 30842 I/Os completed (+5877) 00:10:00.727 00:10:01.293 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:01.293 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.293 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.293 [2024-11-26 00:55:24.128693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:01.293 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:01.293 [2024-11-26 00:55:24.130045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.130215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.130241] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.130254] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:01.293 [2024-11-26 00:55:24.131437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.131475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.131490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.131502] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:01.293 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:01.293 [2024-11-26 00:55:24.153027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:01.293 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:01.293 [2024-11-26 00:55:24.154135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.154175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.154191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.154207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:01.293 [2024-11-26 00:55:24.155543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.155582] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.155597] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 [2024-11-26 00:55:24.155611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:01.293 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:01.293 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:01.293 EAL: Scan for (pci) bus failed. 00:10:01.293 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:01.551 Attaching to 0000:00:10.0 00:10:01.551 Attached to 0000:00:10.0 00:10:01.551 QEMU NVMe Ctrl (12340 ): 49 I/Os completed (+49) 00:10:01.551 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:01.551 00:55:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:01.551 Attaching to 0000:00:11.0 00:10:01.551 Attached to 0000:00:11.0 00:10:02.485 QEMU NVMe Ctrl (12340 ): 6590 I/Os completed (+6541) 00:10:02.485 QEMU NVMe Ctrl (12341 ): 5786 I/Os completed (+5786) 00:10:02.485 00:10:03.418 QEMU NVMe Ctrl (12340 ): 10724 I/Os completed (+4134) 00:10:03.418 QEMU NVMe Ctrl (12341 ): 9815 I/Os completed (+4029) 00:10:03.418 00:10:04.797 QEMU NVMe Ctrl (12340 ): 14780 I/Os completed (+4056) 00:10:04.797 QEMU NVMe Ctrl (12341 ): 13868 I/Os completed (+4053) 00:10:04.797 00:10:05.732 QEMU NVMe Ctrl (12340 ): 20105 I/Os completed (+5325) 00:10:05.732 QEMU NVMe Ctrl (12341 ): 19143 I/Os completed (+5275) 00:10:05.732 00:10:06.676 QEMU NVMe Ctrl (12340 ): 24623 I/Os completed (+4518) 00:10:06.676 QEMU NVMe Ctrl (12341 ): 23320 I/Os completed (+4177) 00:10:06.676 00:10:07.620 QEMU NVMe Ctrl (12340 ): 29035 I/Os completed (+4412) 00:10:07.620 QEMU NVMe Ctrl (12341 ): 27733 I/Os completed (+4413) 00:10:07.620 00:10:08.606 QEMU NVMe Ctrl (12340 ): 33394 I/Os completed (+4359) 
00:10:08.606 QEMU NVMe Ctrl (12341 ): 32110 I/Os completed (+4377) 00:10:08.606 00:10:09.550 QEMU NVMe Ctrl (12340 ): 37413 I/Os completed (+4019) 00:10:09.550 QEMU NVMe Ctrl (12341 ): 36142 I/Os completed (+4032) 00:10:09.550 00:10:10.494 QEMU NVMe Ctrl (12340 ): 41636 I/Os completed (+4223) 00:10:10.494 QEMU NVMe Ctrl (12341 ): 40381 I/Os completed (+4239) 00:10:10.494 00:10:11.440 QEMU NVMe Ctrl (12340 ): 44981 I/Os completed (+3345) 00:10:11.440 QEMU NVMe Ctrl (12341 ): 43732 I/Os completed (+3351) 00:10:11.440 00:10:12.827 QEMU NVMe Ctrl (12340 ): 49235 I/Os completed (+4254) 00:10:12.827 QEMU NVMe Ctrl (12341 ): 47996 I/Os completed (+4264) 00:10:12.827 00:10:13.763 QEMU NVMe Ctrl (12340 ): 53628 I/Os completed (+4393) 00:10:13.763 QEMU NVMe Ctrl (12341 ): 52353 I/Os completed (+4357) 00:10:13.763 00:10:13.763 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:13.763 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.764 [2024-11-26 00:55:36.383312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:13.764 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:13.764 [2024-11-26 00:55:36.384350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.384466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.384500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.384588] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:13.764 [2024-11-26 00:55:36.385745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.385838] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.385868] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.385881] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:13.764 [2024-11-26 00:55:36.405242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:13.764 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:13.764 [2024-11-26 00:55:36.406152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.406309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.406329] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.406344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:13.764 [2024-11-26 00:55:36.407231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.407262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.407273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 [2024-11-26 00:55:36.407285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:13.764 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:13.764 EAL: Scan for (pci) bus failed. 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:13.764 Attaching to 0000:00:10.0 00:10:13.764 Attached to 0000:00:10.0 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:13.764 00:55:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:13.764 Attaching to 0000:00:11.0 00:10:13.764 Attached to 0000:00:11.0 00:10:14.697 QEMU NVMe Ctrl (12340 ): 4940 I/Os completed (+4940) 00:10:14.697 QEMU NVMe Ctrl (12341 ): 4488 I/Os completed (+4488) 00:10:14.697 00:10:15.630 QEMU NVMe Ctrl (12340 ): 12480 I/Os completed (+7540) 00:10:15.630 QEMU NVMe Ctrl (12341 ): 12330 I/Os completed (+7842) 00:10:15.630 00:10:16.563 QEMU NVMe Ctrl (12340 ): 18555 I/Os completed (+6075) 00:10:16.563 QEMU NVMe Ctrl (12341 ): 18641 I/Os completed (+6311) 00:10:16.563 00:10:17.498 QEMU NVMe Ctrl (12340 ): 22644 I/Os completed (+4089) 00:10:17.498 QEMU NVMe Ctrl (12341 ): 22574 I/Os completed (+3933) 00:10:17.498 00:10:18.456 QEMU NVMe Ctrl (12340 ): 26863 I/Os completed (+4219) 00:10:18.456 QEMU NVMe Ctrl (12341 ): 26469 I/Os completed (+3895) 00:10:18.456 00:10:19.830 QEMU NVMe Ctrl (12340 ): 31088 I/Os completed (+4225) 00:10:19.830 QEMU NVMe Ctrl (12341 ): 30429 I/Os completed (+3960) 00:10:19.830 00:10:20.765 QEMU NVMe Ctrl (12340 ): 35005 I/Os completed (+3917) 00:10:20.765 QEMU NVMe Ctrl (12341 ): 34306 I/Os completed (+3877) 00:10:20.765 
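Each of the three hotplug events above is software-triggered, not physical: the bare "echo 1" traced at sw_hotplug.sh@40 appears to land in the device's sysfs remove node (xtrace does not print redirections), the hotplug example then sees the controller enter a failed state and aborts its outstanding trackers, and the function is rescanned back onto the bus and handed to uio_pci_generic before I/O resumes. A hedged sketch of one such cycle using the generic kernel interface (the script's literal sysfs writes are not visible in the trace, so treat these paths as the standard mechanism rather than its exact code):

    bdf=0000:00:10.0   # illustrative BDF; run as root

    echo 1 > "/sys/bus/pci/devices/$bdf/remove"     # surprise-remove the function
    sleep 1                                         # let the driver observe the failure
    echo 1 > /sys/bus/pci/rescan                    # rediscover the function
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe        # rebind it to the userspace stub driver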
00:10:21.698 QEMU NVMe Ctrl (12340 ): 38928 I/Os completed (+3923) 00:10:21.698 QEMU NVMe Ctrl (12341 ): 38202 I/Os completed (+3896) 00:10:21.698 00:10:22.633 QEMU NVMe Ctrl (12340 ): 45797 I/Os completed (+6869) 00:10:22.633 QEMU NVMe Ctrl (12341 ): 45678 I/Os completed (+7476) 00:10:22.633 00:10:23.568 QEMU NVMe Ctrl (12340 ): 50666 I/Os completed (+4869) 00:10:23.568 QEMU NVMe Ctrl (12341 ): 50325 I/Os completed (+4647) 00:10:23.568 00:10:24.501 QEMU NVMe Ctrl (12340 ): 54530 I/Os completed (+3864) 00:10:24.501 QEMU NVMe Ctrl (12341 ): 54203 I/Os completed (+3878) 00:10:24.501 00:10:25.433 QEMU NVMe Ctrl (12340 ): 60416 I/Os completed (+5886) 00:10:25.433 QEMU NVMe Ctrl (12341 ): 60282 I/Os completed (+6079) 00:10:25.433 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:26.000 [2024-11-26 00:55:48.663108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:26.000 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:26.000 [2024-11-26 00:55:48.664423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.664552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.664595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.664663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:26.000 [2024-11-26 00:55:48.666236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.666362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.666454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.666485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:26.000 [2024-11-26 00:55:48.687794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:26.000 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:26.000 [2024-11-26 00:55:48.688801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.688898] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.688932] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.689024] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:26.000 [2024-11-26 00:55:48.690204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.690320] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.690358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 [2024-11-26 00:55:48.690384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:26.000 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:26.000 EAL: Scan for (pci) bus failed. 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:26.000 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:26.000 Attaching to 0000:00:10.0 00:10:26.000 Attached to 0000:00:10.0 00:10:26.258 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:26.258 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.258 00:55:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:26.258 Attaching to 0000:00:11.0 00:10:26.258 Attached to 0000:00:11.0 00:10:26.258 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:26.258 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:26.259 [2024-11-26 00:55:48.943615] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:38.544 00:56:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:38.545 00:56:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.545 00:56:00 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.82 00:10:38.545 00:56:00 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.82 00:10:38.545 00:56:00 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:38.545 00:56:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.82 00:10:38.545 00:56:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.82 2 00:10:38.545 remove_attach_helper took 42.82s to complete (handling 2 nvme drive(s)) 00:56:00 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 80057 00:10:45.107 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (80057) - No such process 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 80057 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80612 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:45.107 00:56:06 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80612 00:10:45.107 00:56:06 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 80612 ']' 00:10:45.107 00:56:06 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.107 00:56:06 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.107 00:56:06 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.107 00:56:06 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.107 00:56:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.107 [2024-11-26 00:56:07.027619] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:10:45.107 [2024-11-26 00:56:07.027734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80612 ] 00:10:45.107 [2024-11-26 00:56:07.160273] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
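Since the hotplug binary exits on its own after its three events, the harness probes it with "kill -0" rather than killing it: signal 0 delivers nothing but still reports whether the PID exists, so the "(80057) - No such process" line above is the expected outcome, and the wait that follows merely reaps the child. The pattern in isolation (the pid value is illustrative):

    hotplug_pid=80057   # illustrative; in practice this is $! from launching the binary

    if kill -0 "$hotplug_pid" 2>/dev/null; then
        echo 'hotplug binary still running'
        wait "$hotplug_pid"              # block until it exits, propagate its status
    else
        wait "$hotplug_pid" 2>/dev/null  # already gone: just reap it if it was our child
    fi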
00:10:45.107 [2024-11-26 00:56:07.189179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.107 [2024-11-26 00:56:07.213127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:45.107 00:56:07 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:45.107 00:56:07 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.668 00:56:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.668 00:56:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.668 00:56:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:51.668 00:56:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.668 [2024-11-26 00:56:13.963779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:10:51.668 [2024-11-26 00:56:13.964946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:13.964983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:13.964996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 [2024-11-26 00:56:13.965013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:13.965021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:13.965032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 [2024-11-26 00:56:13.965039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:13.965047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:13.965054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 [2024-11-26 00:56:13.965062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:13.965068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:13.965077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 [2024-11-26 00:56:14.363773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:51.668 [2024-11-26 00:56:14.364915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:14.364945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:14.364957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 [2024-11-26 00:56:14.364970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:14.364979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:14.364986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 [2024-11-26 00:56:14.364995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:14.365002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:14.365012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 [2024-11-26 00:56:14.365019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.668 [2024-11-26 00:56:14.365028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.668 [2024-11-26 00:56:14.365034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.668 00:56:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.668 00:56:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.668 00:56:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.668 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.926 00:56:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.141 00:56:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.141 00:56:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.141 00:56:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.141 00:56:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.141 00:56:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.141 00:56:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:04.141 00:56:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:04.141 [2024-11-26 00:56:26.863985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
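In this target-driven phase the pass/fail signal comes over RPC instead of stdout: bdev_nvme_set_hotplug -e (traced at sw_hotplug.sh@115) arms the bdev layer's hotplug monitor, and after each sysfs removal the bdev_bdfs helper polls bdev_get_bdevs every half second, printing "Still waiting for %s to be gone" until no NVMe PCI addresses remain; reattachment is then confirmed by the same query matching the expected BDF list. A sketch of that polling loop, with bdev_bdfs reassembled from the traced jq pipeline:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" bdev_nvme_set_hotplug -e   # arm the monitor once, up front

    bdev_bdfs() {   # PCI address of every NVMe-backed bdev the target exposes
        "$rpc_py" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done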
00:11:04.141 [2024-11-26 00:56:26.865289] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.141 [2024-11-26 00:56:26.865396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.141 [2024-11-26 00:56:26.865453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.141 [2024-11-26 00:56:26.865488] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.141 [2024-11-26 00:56:26.865505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.141 [2024-11-26 00:56:26.865530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.141 [2024-11-26 00:56:26.865554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.141 [2024-11-26 00:56:26.865573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.141 [2024-11-26 00:56:26.865887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.141 [2024-11-26 00:56:26.865924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.141 [2024-11-26 00:56:26.865942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.141 [2024-11-26 00:56:26.866008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.402 [2024-11-26 00:56:27.263975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:04.402 [2024-11-26 00:56:27.265146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.402 [2024-11-26 00:56:27.265245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.402 [2024-11-26 00:56:27.265308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.402 [2024-11-26 00:56:27.265335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.402 [2024-11-26 00:56:27.265354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.402 [2024-11-26 00:56:27.265377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.402 [2024-11-26 00:56:27.265401] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.402 [2024-11-26 00:56:27.265418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.402 [2024-11-26 00:56:27.265473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.402 [2024-11-26 00:56:27.265686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.402 [2024-11-26 00:56:27.265696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:04.402 [2024-11-26 00:56:27.265704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.664 00:56:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.664 00:56:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.664 00:56:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.664 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.924 00:56:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.156 00:56:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.156 00:56:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.156 00:56:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:17.156 [2024-11-26 00:56:39.764195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:17.156 [2024-11-26 00:56:39.765503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.156 [2024-11-26 00:56:39.765566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.156 [2024-11-26 00:56:39.765602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.156 [2024-11-26 00:56:39.765644] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.156 [2024-11-26 00:56:39.765663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.156 [2024-11-26 00:56:39.765688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.156 [2024-11-26 00:56:39.765766] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.156 [2024-11-26 00:56:39.765787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.156 [2024-11-26 00:56:39.765811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.156 [2024-11-26 00:56:39.765923] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.156 [2024-11-26 00:56:39.765951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.156 [2024-11-26 00:56:39.765978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.156 00:56:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.156 00:56:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.156 00:56:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:17.156 00:56:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:17.417 [2024-11-26 00:56:40.164206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:17.417 [2024-11-26 00:56:40.165500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.417 [2024-11-26 00:56:40.165605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.417 [2024-11-26 00:56:40.165671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.417 [2024-11-26 00:56:40.165702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.417 [2024-11-26 00:56:40.165794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.417 [2024-11-26 00:56:40.165821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.417 [2024-11-26 00:56:40.165890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.417 [2024-11-26 00:56:40.165911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.417 [2024-11-26 00:56:40.165938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.417 [2024-11-26 00:56:40.165966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.417 [2024-11-26 00:56:40.165990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.417 [2024-11-26 00:56:40.166062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.417 00:56:40 sw_hotplug -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.417 00:56:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.417 00:56:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:17.417 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:17.678 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.938 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.938 00:56:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.77 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.77 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.77 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.77 2 00:11:30.169 remove_attach_helper took 44.77s to complete (handling 2 nvme drive(s)) 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.169 00:56:52 
sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:30.169 00:56:52 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:30.169 00:56:52 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.844 00:56:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.844 00:56:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.844 00:56:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.844 00:56:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.844 [2024-11-26 00:56:58.763933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
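The 44.77s figure above comes from the timing_cmd wrapper (autotest_common.sh@709-722): it runs remove_attach_helper under bash's built-in time with TIMEFORMAT=%2R, so only the real elapsed seconds are emitted, and sw_hotplug.sh@21-22 captures that into helper_time and prints the summary line. A hedged sketch of the mechanism; the exact redirections in autotest_common.sh are not visible in the xtrace:

    # TIMEFORMAT=%2R limits `time` output to wall-clock seconds, 2 decimals.
    timing_cmd() {
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        # Run the command, letting its own output through on stderr while
        # capturing only the timing report on stdout.
        time=$({ time "$@" >&2; } 2>&1) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

With that in place, helper_time=$(timing_cmd remove_attach_helper 3 6 true) matches the @19-@22 trace seen in this log.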
00:11:36.844 [2024-11-26 00:56:58.765103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.844 [2024-11-26 00:56:58.765138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.844 [2024-11-26 00:56:58.765151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.844 [2024-11-26 00:56:58.765166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.845 [2024-11-26 00:56:58.765173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.845 [2024-11-26 00:56:58.765182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.845 [2024-11-26 00:56:58.765189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.845 [2024-11-26 00:56:58.765200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.845 [2024-11-26 00:56:58.765207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.845 [2024-11-26 00:56:58.765215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.845 [2024-11-26 00:56:58.765221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.845 [2024-11-26 00:56:58.765229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.845 [2024-11-26 00:56:59.163921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
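The bare "echo 1" per device at sw_hotplug.sh@39-40 above is the hot-remove trigger. The xtrace shows only the echoed value, never the redirection target, but it is consistent with the standard sysfs PCI remove interface, so a plausible reading (an assumption, not confirmed by the log) is:

    # @39-40: simulate surprise removal of each NVMe device under test.
    # The sysfs path is an assumption; xtrace hides redirections.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done

The failed-state and aborted-command records that follow are the expected fallout: the PCIe transport aborts the outstanding ASYNC EVENT REQUESTs (opcode 0c) on the admin queue, and each completes with status ABORTED - BY REQUEST (00/07), exactly as printed above.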
00:11:36.845 [2024-11-26 00:56:59.164965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.845 [2024-11-26 00:56:59.164995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.845 [2024-11-26 00:56:59.165007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.845 [2024-11-26 00:56:59.165017] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.845 [2024-11-26 00:56:59.165026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.845 [2024-11-26 00:56:59.165033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.845 [2024-11-26 00:56:59.165042] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.845 [2024-11-26 00:56:59.165049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.845 [2024-11-26 00:56:59.165058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.845 [2024-11-26 00:56:59.165064] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.845 [2024-11-26 00:56:59.165075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.845 [2024-11-26 00:56:59.165081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.845 00:56:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.845 00:56:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.845 00:56:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.845 00:56:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.081 00:57:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.081 00:57:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.081 00:57:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.081 [2024-11-26 00:57:11.664143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:49.081 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.081 [2024-11-26 00:57:11.665100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.081 [2024-11-26 00:57:11.665162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.081 [2024-11-26 00:57:11.665201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.081 [2024-11-26 00:57:11.665237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.081 [2024-11-26 00:57:11.665256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.081 [2024-11-26 00:57:11.665283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.081 [2024-11-26 00:57:11.665308] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.082 [2024-11-26 00:57:11.665328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.082 [2024-11-26 00:57:11.665352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.082 [2024-11-26 00:57:11.665377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.082 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.082 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.082 [2024-11-26 00:57:11.665397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.082 [2024-11-26 00:57:11.665414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.082 00:57:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.082 00:57:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.082 00:57:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.082 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:49.082 00:57:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.344 [2024-11-26 00:57:12.164138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
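The alternating "(( N > 0 ))" / "sleep 0.5" / "Still waiting for %s to be gone" lines are single poll iterations of the detach wait-loop (sw_hotplug.sh@50-51), which in sketch form is:

    # @50-51: poll until bdev_bdfs reports no NVMe bdevs left.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done

In this run 0000:00:10.0 drops out first, so the count goes 2, then 1, then 0 before the test moves on to re-attach.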
00:11:49.344 [2024-11-26 00:57:12.167281] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.344 [2024-11-26 00:57:12.167387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.344 [2024-11-26 00:57:12.167451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.344 [2024-11-26 00:57:12.167480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.344 [2024-11-26 00:57:12.167499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.344 [2024-11-26 00:57:12.167522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.344 [2024-11-26 00:57:12.167547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.344 [2024-11-26 00:57:12.167602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.344 [2024-11-26 00:57:12.167631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.344 [2024-11-26 00:57:12.167654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.344 [2024-11-26 00:57:12.167671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.344 [2024-11-26 00:57:12.167727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.344 00:57:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.344 00:57:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.344 00:57:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:49.344 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:49.604 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:49.891 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:49.891 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:49.891 00:57:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.168 00:57:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 00:57:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 00:57:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.168 00:57:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.168 00:57:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.168 00:57:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:02.168 00:57:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:02.168 [2024-11-26 00:57:24.664352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
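Once both controllers are gone, sw_hotplug.sh@56-@66 re-attach them: an "echo 1" (presumably into /sys/bus/pci/rescan), then per device the driver name, the BDF twice, and an empty string, followed by a 12-second settle. Because the xtrace strips redirections, the sysfs targets in the sketch below are labeled assumptions based on the usual driver_override bind sequence, not facts from this log:

    # Hedged reconstruction of @56-@66; all sysfs paths are assumptions.
    echo 1 > /sys/bus/pci/rescan                    # @56: bring devices back
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @60/@61
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62: clear override
    done
    sleep 12                                        # @66: let probe settle

@68-@71 then re-run bdev_bdfs until the output once again matches "0000:00:10.0 0000:00:11.0", closing out the hotplug event before the next iteration decrements hotplug_events at @38.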
00:12:02.168 [2024-11-26 00:57:24.667374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.168 [2024-11-26 00:57:24.667485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.168 [2024-11-26 00:57:24.667543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.168 [2024-11-26 00:57:24.667581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.168 [2024-11-26 00:57:24.667600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.168 [2024-11-26 00:57:24.667625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.168 [2024-11-26 00:57:24.667649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.168 [2024-11-26 00:57:24.667668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.168 [2024-11-26 00:57:24.667747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.168 [2024-11-26 00:57:24.667775] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.168 [2024-11-26 00:57:24.667792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.168 [2024-11-26 00:57:24.667817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.431 00:57:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.431 00:57:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.431 00:57:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:02.431 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:02.693 [2024-11-26 00:57:25.364353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:02.693 [2024-11-26 00:57:25.365106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.693 [2024-11-26 00:57:25.365133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.693 [2024-11-26 00:57:25.365145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.693 [2024-11-26 00:57:25.365155] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.693 [2024-11-26 00:57:25.365165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.693 [2024-11-26 00:57:25.365172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.693 [2024-11-26 00:57:25.365184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.693 [2024-11-26 00:57:25.365191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.693 [2024-11-26 00:57:25.365199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.693 [2024-11-26 00:57:25.365205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.693 [2024-11-26 00:57:25.365213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.693 [2024-11-26 00:57:25.365220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.955 00:57:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.955 00:57:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.955 00:57:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:02.955 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:03.217 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:03.217 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:03.217 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:03.217 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:03.217 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:03.217 00:57:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:03.217 00:57:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:03.217 00:57:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:15.454 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:15.454 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.36 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.36 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.36 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.36 2 00:12:15.455 remove_attach_helper took 45.36s to complete (handling 2 nvme drive(s)) 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:15.455 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80612 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 80612 ']' 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 80612 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80612 00:12:15.455 killing process with pid 80612 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80612' 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@973 -- # kill 80612 00:12:15.455 00:57:38 sw_hotplug -- common/autotest_common.sh@978 -- # wait 80612 00:12:15.716 00:57:38 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:15.978 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:16.239 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.239 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:16.499 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.499 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:16.499 ************************************ 00:12:16.499 00:12:16.499 real 2m28.588s 00:12:16.499 user 1m50.659s 00:12:16.499 sys 0m16.677s 
00:12:16.499 00:57:39 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:16.499 00:57:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.499 END TEST sw_hotplug 00:12:16.499 ************************************ 00:12:16.499 00:57:39 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:16.499 00:57:39 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.499 00:57:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:16.499 00:57:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.499 00:57:39 -- common/autotest_common.sh@10 -- # set +x 00:12:16.499 ************************************ 00:12:16.499 START TEST nvme_xnvme 00:12:16.499 ************************************ 00:12:16.499 00:57:39 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:16.762 * Looking for test storage... 00:12:16.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.762 00:57:39 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:16.762 00:57:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:16.762 00:57:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:16.762 00:57:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:16.762 00:57:39 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:16.763 00:57:39 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:16.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.763 --rc genhtml_branch_coverage=1 00:12:16.763 --rc genhtml_function_coverage=1 00:12:16.763 --rc genhtml_legend=1 00:12:16.763 --rc geninfo_all_blocks=1 00:12:16.763 --rc geninfo_unexecuted_blocks=1 00:12:16.763 00:12:16.763 ' 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:16.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.763 --rc genhtml_branch_coverage=1 00:12:16.763 --rc genhtml_function_coverage=1 00:12:16.763 --rc genhtml_legend=1 00:12:16.763 --rc geninfo_all_blocks=1 00:12:16.763 --rc geninfo_unexecuted_blocks=1 00:12:16.763 00:12:16.763 ' 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:16.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.763 --rc genhtml_branch_coverage=1 00:12:16.763 --rc genhtml_function_coverage=1 00:12:16.763 --rc genhtml_legend=1 00:12:16.763 --rc geninfo_all_blocks=1 00:12:16.763 --rc geninfo_unexecuted_blocks=1 00:12:16.763 00:12:16.763 ' 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:16.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:16.763 --rc genhtml_branch_coverage=1 00:12:16.763 --rc genhtml_function_coverage=1 00:12:16.763 --rc genhtml_legend=1 00:12:16.763 --rc geninfo_all_blocks=1 00:12:16.763 --rc geninfo_unexecuted_blocks=1 00:12:16.763 00:12:16.763 ' 00:12:16.763 00:57:39 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:16.763 00:57:39 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:16.763 00:57:39 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:16.763 00:57:39 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:16.763 00:57:39 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:16.764 00:57:39 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:16.764 00:57:39 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:16.764 #define SPDK_CONFIG_H 00:12:16.764 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:16.764 #define SPDK_CONFIG_APPS 1 00:12:16.764 #define SPDK_CONFIG_ARCH native 00:12:16.764 #define SPDK_CONFIG_ASAN 1 00:12:16.764 #undef SPDK_CONFIG_AVAHI 00:12:16.764 #undef SPDK_CONFIG_CET 00:12:16.764 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:16.764 #define SPDK_CONFIG_COVERAGE 1 00:12:16.764 #define SPDK_CONFIG_CROSS_PREFIX 00:12:16.764 #undef SPDK_CONFIG_CRYPTO 00:12:16.764 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:16.764 #undef SPDK_CONFIG_CUSTOMOCF 00:12:16.764 #undef SPDK_CONFIG_DAOS 00:12:16.764 #define SPDK_CONFIG_DAOS_DIR 00:12:16.764 #define SPDK_CONFIG_DEBUG 1 00:12:16.764 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:16.764 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:16.764 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:16.764 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:16.764 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:16.764 #undef SPDK_CONFIG_DPDK_UADK 00:12:16.764 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:16.764 #define SPDK_CONFIG_EXAMPLES 1 00:12:16.764 #undef SPDK_CONFIG_FC 00:12:16.764 #define SPDK_CONFIG_FC_PATH 00:12:16.764 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:16.764 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:16.764 #define SPDK_CONFIG_FSDEV 1 00:12:16.764 #undef SPDK_CONFIG_FUSE 00:12:16.764 #undef SPDK_CONFIG_FUZZER 00:12:16.764 #define SPDK_CONFIG_FUZZER_LIB 00:12:16.764 #undef SPDK_CONFIG_GOLANG 00:12:16.764 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:16.764 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:16.764 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:16.764 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:16.764 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:16.764 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:16.764 #undef SPDK_CONFIG_HAVE_LZ4 00:12:16.764 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:16.764 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:16.764 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:16.764 #define SPDK_CONFIG_IDXD 1 00:12:16.764 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:16.764 #undef SPDK_CONFIG_IPSEC_MB 00:12:16.764 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:16.764 #define SPDK_CONFIG_ISAL 1 00:12:16.764 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:16.764 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:16.764 #define SPDK_CONFIG_LIBDIR 00:12:16.764 #undef SPDK_CONFIG_LTO 00:12:16.764 #define SPDK_CONFIG_MAX_LCORES 128 00:12:16.764 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:16.764 #define SPDK_CONFIG_NVME_CUSE 1 00:12:16.764 #undef SPDK_CONFIG_OCF 00:12:16.764 #define SPDK_CONFIG_OCF_PATH 00:12:16.764 #define SPDK_CONFIG_OPENSSL_PATH 00:12:16.764 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:16.764 #define SPDK_CONFIG_PGO_DIR 00:12:16.764 #undef SPDK_CONFIG_PGO_USE 00:12:16.764 #define SPDK_CONFIG_PREFIX /usr/local 00:12:16.764 #undef SPDK_CONFIG_RAID5F 00:12:16.764 #undef SPDK_CONFIG_RBD 00:12:16.764 #define SPDK_CONFIG_RDMA 1 00:12:16.764 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:16.764 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:16.764 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:16.764 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:16.764 #define SPDK_CONFIG_SHARED 1 00:12:16.764 #undef SPDK_CONFIG_SMA 00:12:16.764 #define SPDK_CONFIG_TESTS 1 00:12:16.764 #undef SPDK_CONFIG_TSAN 00:12:16.764 #define SPDK_CONFIG_UBLK 1 00:12:16.764 #define SPDK_CONFIG_UBSAN 1 00:12:16.764 #undef SPDK_CONFIG_UNIT_TESTS 00:12:16.764 #undef SPDK_CONFIG_URING 00:12:16.764 #define SPDK_CONFIG_URING_PATH 00:12:16.764 #undef SPDK_CONFIG_URING_ZNS 00:12:16.764 #undef SPDK_CONFIG_USDT 00:12:16.764 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:16.764 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:16.764 #undef SPDK_CONFIG_VFIO_USER 00:12:16.764 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:16.764 #define SPDK_CONFIG_VHOST 1 00:12:16.764 #define SPDK_CONFIG_VIRTIO 1 00:12:16.764 #undef SPDK_CONFIG_VTUNE 00:12:16.764 #define SPDK_CONFIG_VTUNE_DIR 00:12:16.764 #define SPDK_CONFIG_WERROR 1 00:12:16.764 #define SPDK_CONFIG_WPDK_DIR 00:12:16.764 #define SPDK_CONFIG_XNVME 1 00:12:16.764 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:16.764 00:57:39 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:16.764 00:57:39 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:16.764 00:57:39 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:16.764 00:57:39 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:16.764 00:57:39 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:16.764 00:57:39 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:16.764 00:57:39 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.764 00:57:39 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.764 00:57:39 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.764 00:57:39 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:16.764 00:57:39 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:16.764 00:57:39 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:16.764 00:57:39 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:16.764 00:57:39 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:16.764 00:57:39 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:16.764 00:57:39 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:16.765 00:57:39 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@140 -- # : main 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.765 00:57:39 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 81951 ]] 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 81951 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.3tHf8c 00:12:16.766 00:57:39 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.3tHf8c/tests/xnvme /tmp/spdk.3tHf8c 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13245431808 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6340247552 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261960704 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13245431808 00:12:16.766 00:57:39 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6340247552 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265217024 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=172032 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.766 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97349545984 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2353233920 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:16.767 * Looking for test storage... 
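The storage probe traced above reduces to a simple pattern: parse `df -T` into per-mount byte counts, then walk the candidate directories until one has enough free space. A condensed sketch of that logic, not the full set_test_storage from test/common/autotest_common.sh (which, as the trace shows, also records sizes/uses and special-cases tmpfs, ramfs, and /):

    # Sketch of the test-storage probe; storage_candidates[] and requested_size
    # (2 GiB plus 64 MiB of slack) come from the caller, as in the trace above.
    requested_size=2214592512
    declare -A avails
    while read -r src fs size used avail _ mnt; do
        avails["$mnt"]=$((avail * 1024))   # df -T reports 1K blocks; store bytes
    done < <(df -T | grep -v Filesystem)
    for target_dir in "${storage_candidates[@]}"; do
        mnt=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails[$mnt]} >= requested_size )); then
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done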
00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13245431808 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:16.767 00:57:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.029 00:57:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.029 00:57:39 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.030 00:57:39 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.030 00:57:39 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.030 00:57:39 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.030 00:57:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.030 --rc genhtml_branch_coverage=1 00:12:17.030 --rc genhtml_function_coverage=1 00:12:17.030 --rc genhtml_legend=1 00:12:17.030 --rc geninfo_all_blocks=1 00:12:17.030 --rc geninfo_unexecuted_blocks=1 00:12:17.030 00:12:17.030 ' 00:12:17.030 00:57:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.030 --rc genhtml_branch_coverage=1 00:12:17.030 --rc genhtml_function_coverage=1 00:12:17.030 --rc genhtml_legend=1 00:12:17.030 --rc geninfo_all_blocks=1 
00:12:17.030 --rc geninfo_unexecuted_blocks=1 00:12:17.030 00:12:17.030 ' 00:12:17.030 00:57:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.030 --rc genhtml_branch_coverage=1 00:12:17.030 --rc genhtml_function_coverage=1 00:12:17.030 --rc genhtml_legend=1 00:12:17.030 --rc geninfo_all_blocks=1 00:12:17.030 --rc geninfo_unexecuted_blocks=1 00:12:17.030 00:12:17.030 ' 00:12:17.030 00:57:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.030 --rc genhtml_branch_coverage=1 00:12:17.030 --rc genhtml_function_coverage=1 00:12:17.030 --rc genhtml_legend=1 00:12:17.030 --rc geninfo_all_blocks=1 00:12:17.030 --rc geninfo_unexecuted_blocks=1 00:12:17.030 00:12:17.030 ' 00:12:17.030 00:57:39 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.030 00:57:39 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.030 00:57:39 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.030 00:57:39 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.030 00:57:39 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.030 00:57:39 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.030 00:57:39 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.030 00:57:39 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.030 00:57:39 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.030 00:57:39 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.030 00:57:39 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:17.030 00:57:39 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:17.292 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.292 Waiting for block devices as requested 00:12:17.554 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.554 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.554 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:17.815 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:23.110 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:23.110 00:57:45 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:23.110 00:57:46 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:23.110 00:57:46 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:23.371 00:57:46 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:23.371 00:57:46 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:23.371 00:57:46 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:23.371 00:57:46 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:23.371 00:57:46 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:23.371 No valid GPT data, bailing 00:12:23.631 00:57:46 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:23.631 00:57:46 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:23.631 00:57:46 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:23.631 00:57:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:23.631 00:57:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.631 00:57:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.631 00:57:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.631 ************************************ 00:12:23.631 START TEST xnvme_rpc 00:12:23.631 ************************************ 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82351 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82351 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82351 ']' 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:23.631 00:57:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.631 [2024-11-26 00:57:46.429048] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
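The xnvme_rpc test starting here exercises the bdev_xnvme RPCs against a freshly launched spdk_tgt. Outside the harness, the same round trip can be reproduced with scripts/rpc.py; a minimal sketch, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and /dev/nvme0n1 is not otherwise in use:

    # Create an xnvme bdev over libaio, read its config back, then delete it.
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
    # expected: libaio
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev

The trace below performs these same steps through the rpc_cmd/rpc_xnvme wrappers, additionally checking the name, filename, and conserve_cpu parameters of the created bdev.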
00:12:23.631 [2024-11-26 00:57:46.429190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82351 ] 00:12:23.892 [2024-11-26 00:57:46.567464] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:23.892 [2024-11-26 00:57:46.597610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.892 [2024-11-26 00:57:46.639531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.466 xnvme_bdev 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.466 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme 
conserve_cpu 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82351 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82351 ']' 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82351 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82351 00:12:24.728 killing process with pid 82351 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82351' 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82351 00:12:24.728 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82351 00:12:25.302 ************************************ 00:12:25.302 END TEST xnvme_rpc 00:12:25.302 ************************************ 00:12:25.302 00:12:25.302 real 0m1.595s 00:12:25.302 user 0m1.550s 00:12:25.302 sys 0m0.499s 00:12:25.302 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.302 00:57:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.302 00:57:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:25.302 00:57:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:25.302 00:57:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:25.302 00:57:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:25.302 ************************************ 00:12:25.302 START TEST xnvme_bdevperf 00:12:25.302 ************************************ 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:25.302 00:57:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:25.302 { 00:12:25.302 "subsystems": [ 00:12:25.302 { 00:12:25.302 "subsystem": "bdev", 00:12:25.302 "config": [ 00:12:25.302 { 00:12:25.302 "params": { 00:12:25.302 "io_mechanism": "libaio", 00:12:25.302 "conserve_cpu": false, 00:12:25.302 "filename": "/dev/nvme0n1", 00:12:25.302 "name": "xnvme_bdev" 00:12:25.302 }, 00:12:25.302 "method": "bdev_xnvme_create" 00:12:25.302 }, 00:12:25.302 { 00:12:25.302 "method": "bdev_wait_for_examine" 00:12:25.302 } 00:12:25.302 ] 00:12:25.302 } 00:12:25.302 ] 00:12:25.302 } 00:12:25.302 [2024-11-26 00:57:48.074654] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:25.302 [2024-11-26 00:57:48.074785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82409 ] 00:12:25.302 [2024-11-26 00:57:48.211457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:25.564 [2024-11-26 00:57:48.241561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.564 [2024-11-26 00:57:48.281068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.564 Running I/O for 5 seconds... 00:12:27.531 29261.00 IOPS, 114.30 MiB/s [2024-11-26T00:57:51.462Z] 26584.00 IOPS, 103.84 MiB/s [2024-11-26T00:57:52.845Z] 25488.33 IOPS, 99.56 MiB/s [2024-11-26T00:57:53.786Z] 25025.25 IOPS, 97.75 MiB/s [2024-11-26T00:57:53.786Z] 24512.20 IOPS, 95.75 MiB/s 00:12:30.869 Latency(us) 00:12:30.869 [2024-11-26T00:57:53.786Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.869 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:30.869 xnvme_bdev : 5.01 24490.34 95.67 0.00 0.00 2607.64 500.97 7461.02 00:12:30.869 [2024-11-26T00:57:53.786Z] =================================================================================================================== 00:12:30.869 [2024-11-26T00:57:53.786Z] Total : 24490.34 95.67 0.00 0.00 2607.64 500.97 7461.02 00:12:30.869 00:57:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:30.869 00:57:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:30.869 00:57:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:30.869 00:57:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:30.869 00:57:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:30.869 { 00:12:30.869 "subsystems": [ 00:12:30.869 { 00:12:30.869 "subsystem": "bdev", 00:12:30.870 "config": [ 00:12:30.870 { 00:12:30.870 "params": { 00:12:30.870 "io_mechanism": "libaio", 00:12:30.870 "conserve_cpu": false, 00:12:30.870 "filename": "/dev/nvme0n1", 00:12:30.870 "name": "xnvme_bdev" 00:12:30.870 }, 00:12:30.870 "method": "bdev_xnvme_create" 00:12:30.870 }, 00:12:30.870 { 
00:12:30.870 "method": "bdev_wait_for_examine" 00:12:30.870 } 00:12:30.870 ] 00:12:30.870 } 00:12:30.870 ] 00:12:30.870 } 00:12:31.131 [2024-11-26 00:57:53.788306] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:31.131 [2024-11-26 00:57:53.788434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82473 ] 00:12:31.131 [2024-11-26 00:57:53.928401] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:31.131 [2024-11-26 00:57:53.953103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.131 [2024-11-26 00:57:53.992111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.393 Running I/O for 5 seconds... 00:12:33.280 28956.00 IOPS, 113.11 MiB/s [2024-11-26T00:57:57.583Z] 29077.50 IOPS, 113.58 MiB/s [2024-11-26T00:57:58.527Z] 29749.67 IOPS, 116.21 MiB/s [2024-11-26T00:57:59.470Z] 30025.25 IOPS, 117.29 MiB/s [2024-11-26T00:57:59.470Z] 30182.80 IOPS, 117.90 MiB/s 00:12:36.553 Latency(us) 00:12:36.553 [2024-11-26T00:57:59.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.553 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:36.553 xnvme_bdev : 5.01 30154.11 117.79 0.00 0.00 2118.26 456.86 8267.62 00:12:36.553 [2024-11-26T00:57:59.470Z] =================================================================================================================== 00:12:36.553 [2024-11-26T00:57:59.470Z] Total : 30154.11 117.79 0.00 0.00 2118.26 456.86 8267.62 00:12:36.553 ************************************ 00:12:36.553 END TEST xnvme_bdevperf 00:12:36.553 ************************************ 00:12:36.553 00:12:36.553 real 0m11.446s 00:12:36.553 user 0m3.100s 00:12:36.553 sys 0m6.861s 00:12:36.553 00:57:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.553 00:57:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:36.814 00:57:59 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:36.814 00:57:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:36.814 00:57:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.814 00:57:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:36.814 ************************************ 00:12:36.814 START TEST xnvme_fio_plugin 00:12:36.814 ************************************ 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:36.814 00:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:36.814 { 00:12:36.814 "subsystems": [ 00:12:36.814 { 00:12:36.814 "subsystem": "bdev", 00:12:36.814 "config": [ 00:12:36.814 { 00:12:36.814 "params": { 00:12:36.814 "io_mechanism": "libaio", 00:12:36.814 "conserve_cpu": false, 00:12:36.814 "filename": "/dev/nvme0n1", 00:12:36.814 "name": "xnvme_bdev" 00:12:36.814 }, 00:12:36.814 "method": "bdev_xnvme_create" 00:12:36.814 }, 00:12:36.814 { 00:12:36.814 "method": "bdev_wait_for_examine" 00:12:36.814 } 00:12:36.814 ] 00:12:36.814 } 00:12:36.814 ] 00:12:36.814 } 00:12:36.814 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:36.814 fio-3.35 00:12:36.814 Starting 1 thread 00:12:43.406 00:12:43.406 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82587: Tue Nov 26 00:58:05 2024 00:12:43.406 read: IOPS=31.4k, BW=123MiB/s (129MB/s)(613MiB/5001msec) 00:12:43.406 slat (usec): min=3, max=1891, avg=23.07, stdev=104.24 00:12:43.406 clat (usec): min=101, max=6210, avg=1416.16, stdev=534.68 00:12:43.406 lat (usec): min=186, max=6214, avg=1439.23, 
stdev=523.57 00:12:43.406 clat percentiles (usec): 00:12:43.406 | 1.00th=[ 281], 5.00th=[ 553], 10.00th=[ 725], 20.00th=[ 988], 00:12:43.406 | 30.00th=[ 1156], 40.00th=[ 1287], 50.00th=[ 1418], 60.00th=[ 1532], 00:12:43.406 | 70.00th=[ 1663], 80.00th=[ 1811], 90.00th=[ 2040], 95.00th=[ 2245], 00:12:43.406 | 99.00th=[ 2933], 99.50th=[ 3326], 99.90th=[ 4080], 99.95th=[ 4424], 00:12:43.406 | 99.99th=[ 4948] 00:12:43.406 bw ( KiB/s): min=118248, max=133672, per=99.99%, avg=125523.56, stdev=4971.91, samples=9 00:12:43.406 iops : min=29562, max=33418, avg=31380.89, stdev=1242.98, samples=9 00:12:43.406 lat (usec) : 250=0.68%, 500=3.36%, 750=6.79%, 1000=9.79% 00:12:43.406 lat (msec) : 2=68.23%, 4=11.05%, 10=0.12% 00:12:43.406 cpu : usr=39.88%, sys=51.62%, ctx=42, majf=0, minf=773 00:12:43.406 IO depths : 1=0.5%, 2=1.1%, 4=2.9%, 8=8.1%, 16=23.0%, 32=62.2%, >=64=2.1% 00:12:43.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.406 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:43.406 issued rwts: total=156952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:43.406 00:12:43.406 Run status group 0 (all jobs): 00:12:43.406 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=613MiB (643MB), run=5001-5001msec 00:12:43.406 ----------------------------------------------------- 00:12:43.406 Suppressions used: 00:12:43.406 count bytes template 00:12:43.406 1 11 /usr/src/fio/parse.c 00:12:43.406 1 8 libtcmalloc_minimal.so 00:12:43.406 1 904 libcrypto.so 00:12:43.406 ----------------------------------------------------- 00:12:43.406 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # grep libasan 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:43.406 00:58:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:43.406 { 00:12:43.406 "subsystems": [ 00:12:43.406 { 00:12:43.406 "subsystem": "bdev", 00:12:43.406 "config": [ 00:12:43.406 { 00:12:43.406 "params": { 00:12:43.407 "io_mechanism": "libaio", 00:12:43.407 "conserve_cpu": false, 00:12:43.407 "filename": "/dev/nvme0n1", 00:12:43.407 "name": "xnvme_bdev" 00:12:43.407 }, 00:12:43.407 "method": "bdev_xnvme_create" 00:12:43.407 }, 00:12:43.407 { 00:12:43.407 "method": "bdev_wait_for_examine" 00:12:43.407 } 00:12:43.407 ] 00:12:43.407 } 00:12:43.407 ] 00:12:43.407 } 00:12:43.407 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:43.407 fio-3.35 00:12:43.407 Starting 1 thread 00:12:48.697 00:12:48.697 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82679: Tue Nov 26 00:58:11 2024 00:12:48.697 write: IOPS=31.1k, BW=121MiB/s (127MB/s)(607MiB/5001msec); 0 zone resets 00:12:48.697 slat (usec): min=4, max=1811, avg=24.83, stdev=95.41 00:12:48.697 clat (usec): min=106, max=5102, avg=1375.27, stdev=583.49 00:12:48.697 lat (usec): min=178, max=5283, avg=1400.10, stdev=575.52 00:12:48.697 clat percentiles (usec): 00:12:48.697 | 1.00th=[ 277], 5.00th=[ 490], 10.00th=[ 668], 20.00th=[ 898], 00:12:48.697 | 30.00th=[ 1057], 40.00th=[ 1205], 50.00th=[ 1352], 60.00th=[ 1483], 00:12:48.697 | 70.00th=[ 1631], 80.00th=[ 1795], 90.00th=[ 2073], 95.00th=[ 2376], 00:12:48.697 | 99.00th=[ 3163], 99.50th=[ 3490], 99.90th=[ 4080], 99.95th=[ 4228], 00:12:48.697 | 99.99th=[ 4555] 00:12:48.697 bw ( KiB/s): min=111664, max=136248, per=99.72%, avg=123996.44, stdev=7347.56, samples=9 00:12:48.697 iops : min=27916, max=34062, avg=30999.11, stdev=1836.89, samples=9 00:12:48.697 lat (usec) : 250=0.73%, 500=4.50%, 750=7.99%, 1000=12.94% 00:12:48.697 lat (msec) : 2=62.06%, 4=11.64%, 10=0.13% 00:12:48.697 cpu : usr=35.30%, sys=54.18%, ctx=15, majf=0, minf=773 00:12:48.697 IO depths : 1=0.4%, 2=1.0%, 4=3.0%, 8=8.4%, 16=23.3%, 32=61.8%, >=64=2.1% 00:12:48.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.697 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:48.697 issued rwts: total=0,155459,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.697 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:48.697 00:12:48.697 Run status group 0 (all jobs): 00:12:48.697 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=607MiB (637MB), run=5001-5001msec 
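Note: the fio runs captured above all follow one invocation pattern, so a minimal sketch of reproducing one by hand may help when reading this output. It assumes fio under /usr/src/fio and the SPDK plugin built at /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev, both as in this workspace; conf.json is a hypothetical file standing in for the config the harness pipes through /dev/fd/62, with its body copied from the gen_conf dump above.

  # bdev config equivalent to the gen_conf output in the log
  cat > conf.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "libaio",
              "conserve_cpu": false,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }
  EOF
  # the harness preloads libasan ahead of the plugin because this build is
  # ASan-instrumented; drop it for a non-sanitized build
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=conf.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

With ioengine=spdk_bdev, --filename names the bdev from the config rather than a block device; that is how the plugin resolves its target.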
00:12:48.957 ----------------------------------------------------- 00:12:48.957 Suppressions used: 00:12:48.957 count bytes template 00:12:48.957 1 11 /usr/src/fio/parse.c 00:12:48.957 1 8 libtcmalloc_minimal.so 00:12:48.957 1 904 libcrypto.so 00:12:48.957 ----------------------------------------------------- 00:12:48.957 00:12:49.217 00:12:49.218 real 0m12.381s 00:12:49.218 user 0m5.024s 00:12:49.218 sys 0m5.997s 00:12:49.218 ************************************ 00:12:49.218 END TEST xnvme_fio_plugin 00:12:49.218 ************************************ 00:12:49.218 00:58:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.218 00:58:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:49.218 00:58:11 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:49.218 00:58:11 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:49.218 00:58:11 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:49.218 00:58:11 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:49.218 00:58:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:49.218 00:58:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.218 00:58:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.218 ************************************ 00:12:49.218 START TEST xnvme_rpc 00:12:49.218 ************************************ 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:49.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82754 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82754 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82754 ']' 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.218 00:58:11 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:49.218 [2024-11-26 00:58:12.072981] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:49.218 [2024-11-26 00:58:12.073385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82754 ] 00:12:49.479 [2024-11-26 00:58:12.213318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:12:49.479 [2024-11-26 00:58:12.242335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.479 [2024-11-26 00:58:12.283627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.052 xnvme_bdev 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:50.052 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:50.313 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:50.313 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:50.313 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.313 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.313 00:58:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.313 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:50.313 00:58:12 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:50.313 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82754 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82754 ']' 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82754 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82754 00:12:50.314 killing process with pid 82754 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82754' 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82754 00:12:50.314 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82754 00:12:50.887 00:12:50.887 real 0m1.626s 00:12:50.888 user 0m1.564s 00:12:50.888 sys 0m0.532s 00:12:50.888 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.888 00:58:13 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.888 ************************************ 00:12:50.888 END TEST xnvme_rpc 00:12:50.888 ************************************ 00:12:50.888 00:58:13 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:50.888 00:58:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:50.888 00:58:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.888 00:58:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:50.888 ************************************ 00:12:50.888 START TEST xnvme_bdevperf 00:12:50.888 ************************************ 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:50.888 00:58:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:50.888 { 00:12:50.888 "subsystems": [ 00:12:50.888 { 00:12:50.888 
"subsystem": "bdev", 00:12:50.888 "config": [ 00:12:50.888 { 00:12:50.888 "params": { 00:12:50.888 "io_mechanism": "libaio", 00:12:50.888 "conserve_cpu": true, 00:12:50.888 "filename": "/dev/nvme0n1", 00:12:50.888 "name": "xnvme_bdev" 00:12:50.888 }, 00:12:50.888 "method": "bdev_xnvme_create" 00:12:50.888 }, 00:12:50.888 { 00:12:50.888 "method": "bdev_wait_for_examine" 00:12:50.888 } 00:12:50.888 ] 00:12:50.888 } 00:12:50.888 ] 00:12:50.888 } 00:12:50.888 [2024-11-26 00:58:13.749903] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:12:50.888 [2024-11-26 00:58:13.750271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82817 ] 00:12:51.150 [2024-11-26 00:58:13.887689] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:51.150 [2024-11-26 00:58:13.917704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.150 [2024-11-26 00:58:13.956732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.411 Running I/O for 5 seconds... 00:12:53.299 28674.00 IOPS, 112.01 MiB/s [2024-11-26T00:58:17.161Z] 28006.50 IOPS, 109.40 MiB/s [2024-11-26T00:58:18.546Z] 27261.33 IOPS, 106.49 MiB/s [2024-11-26T00:58:19.121Z] 27607.75 IOPS, 107.84 MiB/s 00:12:56.204 Latency(us) 00:12:56.204 [2024-11-26T00:58:19.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.204 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:56.204 xnvme_bdev : 5.00 27519.71 107.50 0.00 0.00 2320.83 258.36 11494.01 00:12:56.204 [2024-11-26T00:58:19.121Z] =================================================================================================================== 00:12:56.204 [2024-11-26T00:58:19.121Z] Total : 27519.71 107.50 0.00 0.00 2320.83 258.36 11494.01 00:12:56.797 00:58:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:56.797 00:58:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:56.797 00:58:19 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:56.797 00:58:19 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:56.797 00:58:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:56.797 { 00:12:56.797 "subsystems": [ 00:12:56.797 { 00:12:56.797 "subsystem": "bdev", 00:12:56.797 "config": [ 00:12:56.797 { 00:12:56.797 "params": { 00:12:56.797 "io_mechanism": "libaio", 00:12:56.797 "conserve_cpu": true, 00:12:56.797 "filename": "/dev/nvme0n1", 00:12:56.797 "name": "xnvme_bdev" 00:12:56.797 }, 00:12:56.797 "method": "bdev_xnvme_create" 00:12:56.797 }, 00:12:56.797 { 00:12:56.797 "method": "bdev_wait_for_examine" 00:12:56.797 } 00:12:56.797 ] 00:12:56.797 } 00:12:56.797 ] 00:12:56.797 } 00:12:56.797 [2024-11-26 00:58:19.470349] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:12:56.797 [2024-11-26 00:58:19.470501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82881 ] 00:12:56.797 [2024-11-26 00:58:19.608672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:56.797 [2024-11-26 00:58:19.637824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.797 [2024-11-26 00:58:19.677496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.072 Running I/O for 5 seconds... 00:12:58.987 30224.00 IOPS, 118.06 MiB/s [2024-11-26T00:58:23.288Z] 30157.50 IOPS, 117.80 MiB/s [2024-11-26T00:58:23.857Z] 30617.67 IOPS, 119.60 MiB/s [2024-11-26T00:58:25.242Z] 30565.00 IOPS, 119.39 MiB/s 00:13:02.325 Latency(us) 00:13:02.325 [2024-11-26T00:58:25.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.325 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:02.325 xnvme_bdev : 5.00 30599.59 119.53 0.00 0.00 2086.47 182.74 9225.45 00:13:02.325 [2024-11-26T00:58:25.242Z] =================================================================================================================== 00:13:02.325 [2024-11-26T00:58:25.242Z] Total : 30599.59 119.53 0.00 0.00 2086.47 182.74 9225.45 00:13:02.325 00:13:02.325 real 0m11.442s 00:13:02.325 user 0m3.213s 00:13:02.325 sys 0m6.707s 00:13:02.325 ************************************ 00:13:02.325 END TEST xnvme_bdevperf 00:13:02.325 ************************************ 00:13:02.325 00:58:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.325 00:58:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.325 00:58:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:02.325 00:58:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:02.325 00:58:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.325 00:58:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.325 ************************************ 00:13:02.325 START TEST xnvme_fio_plugin 00:13:02.325 ************************************ 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # 
local fio_dir=/usr/src/fio 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:02.325 00:58:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:02.325 { 00:13:02.325 "subsystems": [ 00:13:02.325 { 00:13:02.325 "subsystem": "bdev", 00:13:02.325 "config": [ 00:13:02.325 { 00:13:02.325 "params": { 00:13:02.325 "io_mechanism": "libaio", 00:13:02.325 "conserve_cpu": true, 00:13:02.325 "filename": "/dev/nvme0n1", 00:13:02.325 "name": "xnvme_bdev" 00:13:02.325 }, 00:13:02.325 "method": "bdev_xnvme_create" 00:13:02.325 }, 00:13:02.325 { 00:13:02.325 "method": "bdev_wait_for_examine" 00:13:02.325 } 00:13:02.325 ] 00:13:02.325 } 00:13:02.325 ] 00:13:02.325 } 00:13:02.583 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:02.583 fio-3.35 00:13:02.583 Starting 1 thread 00:13:09.171 00:13:09.171 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82995: Tue Nov 26 00:58:30 2024 00:13:09.171 read: IOPS=34.1k, BW=133MiB/s (140MB/s)(666MiB/5001msec) 00:13:09.171 slat (usec): min=3, max=1834, avg=22.21, stdev=93.10 00:13:09.171 clat (usec): min=105, max=4648, avg=1274.90, stdev=583.97 00:13:09.171 lat (usec): min=159, max=4736, avg=1297.11, stdev=577.54 00:13:09.171 clat percentiles (usec): 00:13:09.171 | 1.00th=[ 237], 5.00th=[ 404], 10.00th=[ 553], 20.00th=[ 742], 00:13:09.171 | 30.00th=[ 914], 40.00th=[ 1090], 50.00th=[ 1254], 60.00th=[ 1418], 00:13:09.171 | 70.00th=[ 1565], 80.00th=[ 1729], 90.00th=[ 1991], 95.00th=[ 2245], 00:13:09.171 | 99.00th=[ 2966], 
99.50th=[ 3294], 99.90th=[ 3818], 99.95th=[ 3949], 00:13:09.171 | 99.99th=[ 4178] 00:13:09.171 bw ( KiB/s): min=118184, max=168720, per=97.46%, avg=132817.78, stdev=17971.68, samples=9 00:13:09.171 iops : min=29546, max=42180, avg=33204.44, stdev=4492.92, samples=9 00:13:09.171 lat (usec) : 250=1.26%, 500=6.68%, 750=12.68%, 1000=14.27% 00:13:09.171 lat (msec) : 2=55.20%, 4=9.89%, 10=0.03% 00:13:09.171 cpu : usr=37.64%, sys=53.32%, ctx=16, majf=0, minf=773 00:13:09.171 IO depths : 1=0.4%, 2=1.0%, 4=2.8%, 8=8.1%, 16=23.2%, 32=62.3%, >=64=2.1% 00:13:09.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.171 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:09.171 issued rwts: total=170377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.171 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:09.171 00:13:09.171 Run status group 0 (all jobs): 00:13:09.171 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=666MiB (698MB), run=5001-5001msec 00:13:09.171 ----------------------------------------------------- 00:13:09.171 Suppressions used: 00:13:09.171 count bytes template 00:13:09.171 1 11 /usr/src/fio/parse.c 00:13:09.171 1 8 libtcmalloc_minimal.so 00:13:09.172 1 904 libcrypto.so 00:13:09.172 ----------------------------------------------------- 00:13:09.172 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:09.172 00:58:31 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:09.172 00:58:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:09.172 { 00:13:09.172 "subsystems": [ 00:13:09.172 { 00:13:09.172 "subsystem": "bdev", 00:13:09.172 "config": [ 00:13:09.172 { 00:13:09.172 "params": { 00:13:09.172 "io_mechanism": "libaio", 00:13:09.172 "conserve_cpu": true, 00:13:09.172 "filename": "/dev/nvme0n1", 00:13:09.172 "name": "xnvme_bdev" 00:13:09.172 }, 00:13:09.172 "method": "bdev_xnvme_create" 00:13:09.172 }, 00:13:09.172 { 00:13:09.172 "method": "bdev_wait_for_examine" 00:13:09.172 } 00:13:09.172 ] 00:13:09.172 } 00:13:09.172 ] 00:13:09.172 } 00:13:09.172 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:09.172 fio-3.35 00:13:09.172 Starting 1 thread 00:13:14.464 00:13:14.464 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83081: Tue Nov 26 00:58:37 2024 00:13:14.464 write: IOPS=43.0k, BW=168MiB/s (176MB/s)(839MiB/5001msec); 0 zone resets 00:13:14.464 slat (usec): min=3, max=1638, avg=18.38, stdev=63.09 00:13:14.464 clat (usec): min=84, max=8951, avg=989.43, stdev=512.63 00:13:14.464 lat (usec): min=140, max=8987, avg=1007.81, stdev=510.58 00:13:14.464 clat percentiles (usec): 00:13:14.464 | 1.00th=[ 210], 5.00th=[ 318], 10.00th=[ 416], 20.00th=[ 578], 00:13:14.464 | 30.00th=[ 701], 40.00th=[ 807], 50.00th=[ 914], 60.00th=[ 1029], 00:13:14.464 | 70.00th=[ 1156], 80.00th=[ 1336], 90.00th=[ 1631], 95.00th=[ 1926], 00:13:14.464 | 99.00th=[ 2606], 99.50th=[ 2900], 99.90th=[ 3621], 99.95th=[ 4015], 00:13:14.464 | 99.99th=[ 8717] 00:13:14.464 bw ( KiB/s): min=143608, max=199216, per=100.00%, avg=172334.22, stdev=18444.72, samples=9 00:13:14.464 iops : min=35902, max=49804, avg=43083.56, stdev=4611.18, samples=9 00:13:14.464 lat (usec) : 100=0.01%, 250=2.13%, 500=12.68%, 750=19.50%, 1000=23.57% 00:13:14.464 lat (msec) : 2=37.91%, 4=4.15%, 10=0.05% 00:13:14.464 cpu : usr=35.08%, sys=53.28%, ctx=16, majf=0, minf=773 00:13:14.464 IO depths : 1=0.2%, 2=0.9%, 4=3.1%, 8=9.5%, 16=24.6%, 32=59.7%, >=64=2.0% 00:13:14.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:14.464 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:14.464 issued rwts: total=0,214850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:14.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:14.464 00:13:14.464 Run status group 0 (all jobs): 00:13:14.464 WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=839MiB (880MB), run=5001-5001msec 00:13:14.726 ----------------------------------------------------- 00:13:14.726 Suppressions used: 00:13:14.726 count bytes template 00:13:14.726 1 11 /usr/src/fio/parse.c 00:13:14.726 1 8 libtcmalloc_minimal.so 00:13:14.726 1 904 libcrypto.so 00:13:14.726 ----------------------------------------------------- 
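Note: the surrounding xnvme_rpc tests (the libaio one above, created with the -c conserve_cpu flag, and the io_uring one that follows) configure the same bdev over JSON-RPC at runtime instead of through a pre-generated config. A minimal sketch of that flow, assuming the repo's scripts/rpc.py client against a freshly started spdk_tgt; the positional filename/name/io_mechanism arguments mirror the rpc_cmd calls in this log.

  # start the target; the harness instead polls /var/tmp/spdk.sock via waitforlisten
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring
  # read back one param, as the rpc_xnvme helper does with jq
  $rpc framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
  $rpc bdev_xnvme_delete xnvme_bdev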
00:13:14.726 00:13:14.726 00:13:14.726 real 0m12.357s 00:13:14.726 user 0m4.903s 00:13:14.726 sys 0m6.016s 00:13:14.726 00:58:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.726 00:58:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:14.726 ************************************ 00:13:14.726 END TEST xnvme_fio_plugin 00:13:14.726 ************************************ 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:14.726 00:58:37 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:14.726 00:58:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.726 00:58:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.726 00:58:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.726 ************************************ 00:13:14.726 START TEST xnvme_rpc 00:13:14.726 ************************************ 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:14.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83156 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83156 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83156 ']' 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.726 00:58:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.988 [2024-11-26 00:58:37.718274] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:13:14.988 [2024-11-26 00:58:37.718702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83156 ] 00:13:14.988 [2024-11-26 00:58:37.856680] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:14.988 [2024-11-26 00:58:37.888394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.249 [2024-11-26 00:58:37.928549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.820 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.820 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:15.820 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:15.820 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.820 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.820 xnvme_bdev 00:13:15.820 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # 
rpc_xnvme conserve_cpu 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83156 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83156 ']' 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83156 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:15.821 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.082 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83156 00:13:16.082 killing process with pid 83156 00:13:16.082 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.082 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.082 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83156' 00:13:16.082 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83156 00:13:16.082 00:58:38 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83156 00:13:16.344 00:13:16.344 real 0m1.632s 00:13:16.344 user 0m1.603s 00:13:16.344 sys 0m0.516s 00:13:16.606 ************************************ 00:13:16.606 END TEST xnvme_rpc 00:13:16.606 ************************************ 00:13:16.606 00:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.606 00:58:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.606 00:58:39 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:16.606 00:58:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:16.606 00:58:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.606 00:58:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.606 ************************************ 00:13:16.606 START TEST xnvme_bdevperf 00:13:16.606 ************************************ 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:16.606 00:58:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.606 { 00:13:16.606 "subsystems": [ 00:13:16.606 { 00:13:16.606 "subsystem": "bdev", 00:13:16.606 "config": [ 00:13:16.606 { 00:13:16.606 "params": { 00:13:16.606 "io_mechanism": "io_uring", 00:13:16.606 "conserve_cpu": false, 00:13:16.606 "filename": "/dev/nvme0n1", 00:13:16.606 "name": "xnvme_bdev" 00:13:16.606 }, 00:13:16.606 "method": "bdev_xnvme_create" 00:13:16.606 }, 00:13:16.606 { 00:13:16.606 "method": "bdev_wait_for_examine" 00:13:16.606 } 00:13:16.606 ] 00:13:16.606 } 00:13:16.606 ] 00:13:16.606 } 00:13:16.606 [2024-11-26 00:58:39.413042] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:16.606 [2024-11-26 00:58:39.413194] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83214 ] 00:13:16.869 [2024-11-26 00:58:39.553502] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:16.869 [2024-11-26 00:58:39.581649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.869 [2024-11-26 00:58:39.623205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.869 Running I/O for 5 seconds... 00:13:19.200 33688.00 IOPS, 131.59 MiB/s [2024-11-26T00:58:43.135Z] 34461.00 IOPS, 134.61 MiB/s [2024-11-26T00:58:44.099Z] 34980.33 IOPS, 136.64 MiB/s [2024-11-26T00:58:45.042Z] 34825.25 IOPS, 136.04 MiB/s [2024-11-26T00:58:45.042Z] 34848.80 IOPS, 136.13 MiB/s 00:13:22.125 Latency(us) 00:13:22.125 [2024-11-26T00:58:45.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.125 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:22.125 xnvme_bdev : 5.01 34804.67 135.96 0.00 0.00 1834.77 567.14 8318.03 00:13:22.125 [2024-11-26T00:58:45.042Z] =================================================================================================================== 00:13:22.125 [2024-11-26T00:58:45.042Z] Total : 34804.67 135.96 0.00 0.00 1834.77 567.14 8318.03 00:13:22.125 00:58:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:22.125 00:58:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:22.125 00:58:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:22.125 00:58:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:22.125 00:58:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:22.386 { 00:13:22.386 "subsystems": [ 00:13:22.386 { 00:13:22.386 "subsystem": "bdev", 00:13:22.386 "config": [ 00:13:22.386 { 00:13:22.386 "params": { 00:13:22.386 "io_mechanism": "io_uring", 00:13:22.387 "conserve_cpu": false, 00:13:22.387 "filename": "/dev/nvme0n1", 00:13:22.387 "name": "xnvme_bdev" 00:13:22.387 }, 00:13:22.387 "method": "bdev_xnvme_create" 00:13:22.387 }, 
00:13:22.387 { 00:13:22.387 "method": "bdev_wait_for_examine" 00:13:22.387 } 00:13:22.387 ] 00:13:22.387 } 00:13:22.387 ] 00:13:22.387 } 00:13:22.387 [2024-11-26 00:58:45.099972] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:22.387 [2024-11-26 00:58:45.100292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83286 ] 00:13:22.387 [2024-11-26 00:58:45.237670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:22.387 [2024-11-26 00:58:45.266745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.647 [2024-11-26 00:58:45.307496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.647 Running I/O for 5 seconds... 00:13:24.980 36333.00 IOPS, 141.93 MiB/s [2024-11-26T00:58:48.468Z] 36138.00 IOPS, 141.16 MiB/s [2024-11-26T00:58:49.862Z] 36121.00 IOPS, 141.10 MiB/s [2024-11-26T00:58:50.904Z] 36265.00 IOPS, 141.66 MiB/s 00:13:27.987 Latency(us) 00:13:27.987 [2024-11-26T00:58:50.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.987 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:27.987 xnvme_bdev : 5.00 36317.70 141.87 0.00 0.00 1758.48 283.57 7208.96 00:13:27.987 [2024-11-26T00:58:50.904Z] =================================================================================================================== 00:13:27.987 [2024-11-26T00:58:50.904Z] Total : 36317.70 141.87 0.00 0.00 1758.48 283.57 7208.96 00:13:27.987 ************************************ 00:13:27.987 END TEST xnvme_bdevperf 00:13:27.987 ************************************ 00:13:27.987 00:13:27.987 real 0m11.391s 00:13:27.987 user 0m4.224s 00:13:27.987 sys 0m6.890s 00:13:27.987 00:58:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.987 00:58:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.988 00:58:50 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:27.988 00:58:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.988 00:58:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.988 00:58:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.988 ************************************ 00:13:27.988 START TEST xnvme_fio_plugin 00:13:27.988 ************************************ 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:27.988 00:58:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:27.988 { 00:13:27.988 "subsystems": [ 00:13:27.988 { 00:13:27.988 "subsystem": "bdev", 00:13:27.988 "config": [ 00:13:27.988 { 00:13:27.988 "params": { 00:13:27.988 "io_mechanism": "io_uring", 00:13:27.988 "conserve_cpu": false, 00:13:27.988 "filename": "/dev/nvme0n1", 00:13:27.988 "name": "xnvme_bdev" 00:13:27.988 }, 00:13:27.988 "method": "bdev_xnvme_create" 00:13:27.988 }, 00:13:27.988 { 00:13:27.988 "method": "bdev_wait_for_examine" 00:13:27.988 } 00:13:27.988 ] 00:13:27.988 } 00:13:27.988 ] 00:13:27.988 } 00:13:28.271 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:28.271 fio-3.35 00:13:28.271 Starting 1 thread 00:13:33.565 00:13:33.565 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83389: Tue Nov 26 00:58:56 2024 00:13:33.565 read: IOPS=35.1k, BW=137MiB/s (144MB/s)(685MiB/5001msec) 00:13:33.565 slat (nsec): min=2715, max=95867, avg=3297.74, stdev=1807.70 00:13:33.565 clat (usec): min=763, max=5150, avg=1691.26, stdev=305.62 00:13:33.565 lat (usec): min=766, max=5160, avg=1694.56, stdev=305.95 00:13:33.565 clat 
percentiles (usec): 00:13:33.565 | 1.00th=[ 1106], 5.00th=[ 1237], 10.00th=[ 1336], 20.00th=[ 1450], 00:13:33.565 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1745], 00:13:33.565 | 70.00th=[ 1811], 80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2212], 00:13:33.565 | 99.00th=[ 2540], 99.50th=[ 2704], 99.90th=[ 3130], 99.95th=[ 3523], 00:13:33.565 | 99.99th=[ 5080] 00:13:33.565 bw ( KiB/s): min=129024, max=151552, per=99.08%, avg=138921.78, stdev=9332.94, samples=9 00:13:33.565 iops : min=32256, max=37888, avg=34730.44, stdev=2333.24, samples=9 00:13:33.565 lat (usec) : 1000=0.27% 00:13:33.565 lat (msec) : 2=85.85%, 4=13.84%, 10=0.04% 00:13:33.565 cpu : usr=31.72%, sys=67.10%, ctx=21, majf=0, minf=771 00:13:33.565 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:33.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.565 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:33.565 issued rwts: total=175295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.565 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:33.565 00:13:33.565 Run status group 0 (all jobs): 00:13:33.565 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=685MiB (718MB), run=5001-5001msec 00:13:34.138 ----------------------------------------------------- 00:13:34.138 Suppressions used: 00:13:34.138 count bytes template 00:13:34.138 1 11 /usr/src/fio/parse.c 00:13:34.138 1 8 libtcmalloc_minimal.so 00:13:34.138 1 904 libcrypto.so 00:13:34.138 ----------------------------------------------------- 00:13:34.138 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 
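The fio invocations traced above are stock fio driven through SPDK's external spdk_bdev ioengine, with the bdev configuration streamed in on file descriptor 62 and ASan preloaded ahead of the plugin. A standalone sketch of the same run, using the paths from this environment and a bash here-string as one (assumed) way to populate fd 62:

  # Preload libasan before the SPDK fio plugin so the sanitizer hooks
  # resolve first, then run fio with the spdk_bdev ioengine; the bdev
  # layer builds an xnvme bdev over /dev/nvme0n1 from the JSON it reads
  # on /dev/fd/62.
  cfg='{"subsystems":[{"subsystem":"bdev","config":[
    {"params":{"io_mechanism":"io_uring","conserve_cpu":false,
     "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
     "method":"bdev_xnvme_create"},
    {"method":"bdev_wait_for_examine"}]}]}'
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev \
    62<<< "$cfg"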
00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:34.138 00:58:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:34.138 { 00:13:34.138 "subsystems": [ 00:13:34.138 { 00:13:34.138 "subsystem": "bdev", 00:13:34.138 "config": [ 00:13:34.138 { 00:13:34.138 "params": { 00:13:34.138 "io_mechanism": "io_uring", 00:13:34.138 "conserve_cpu": false, 00:13:34.138 "filename": "/dev/nvme0n1", 00:13:34.138 "name": "xnvme_bdev" 00:13:34.138 }, 00:13:34.138 "method": "bdev_xnvme_create" 00:13:34.138 }, 00:13:34.138 { 00:13:34.138 "method": "bdev_wait_for_examine" 00:13:34.138 } 00:13:34.138 ] 00:13:34.138 } 00:13:34.138 ] 00:13:34.138 } 00:13:34.138 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:34.138 fio-3.35 00:13:34.138 Starting 1 thread 00:13:40.724 00:13:40.724 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83475: Tue Nov 26 00:59:02 2024 00:13:40.724 write: IOPS=38.8k, BW=152MiB/s (159MB/s)(758MiB/5002msec); 0 zone resets 00:13:40.724 slat (usec): min=2, max=453, avg= 3.63, stdev= 2.21 00:13:40.724 clat (usec): min=153, max=9684, avg=1505.37, stdev=282.59 00:13:40.724 lat (usec): min=157, max=9687, avg=1509.00, stdev=282.77 00:13:40.724 clat percentiles (usec): 00:13:40.724 | 1.00th=[ 1057], 5.00th=[ 1156], 10.00th=[ 1221], 20.00th=[ 1303], 00:13:40.724 | 30.00th=[ 1352], 40.00th=[ 1418], 50.00th=[ 1467], 60.00th=[ 1532], 00:13:40.724 | 70.00th=[ 1598], 80.00th=[ 1696], 90.00th=[ 1827], 95.00th=[ 1958], 00:13:40.724 | 99.00th=[ 2311], 99.50th=[ 2474], 99.90th=[ 2966], 99.95th=[ 3720], 00:13:40.724 | 99.99th=[ 8160] 00:13:40.724 bw ( KiB/s): min=148296, max=158720, per=99.47%, avg=154339.56, stdev=3587.08, samples=9 00:13:40.724 iops : min=37074, max=39680, avg=38584.89, stdev=896.77, samples=9 00:13:40.724 lat (usec) : 250=0.01%, 500=0.01%, 750=0.04%, 1000=0.40% 00:13:40.724 lat (msec) : 2=95.54%, 4=3.97%, 10=0.04% 00:13:40.724 cpu : usr=32.51%, sys=65.87%, ctx=10, majf=0, minf=771 00:13:40.724 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:40.724 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.724 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:40.724 issued rwts: total=0,194035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.724 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:40.724 00:13:40.724 Run status group 0 (all jobs): 00:13:40.724 WRITE: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=758MiB (795MB), run=5002-5002msec 00:13:40.724 
----------------------------------------------------- 00:13:40.725 Suppressions used: 00:13:40.725 count bytes template 00:13:40.725 1 11 /usr/src/fio/parse.c 00:13:40.725 1 8 libtcmalloc_minimal.so 00:13:40.725 1 904 libcrypto.so 00:13:40.725 ----------------------------------------------------- 00:13:40.725 00:13:40.725 00:13:40.725 real 0m12.118s 00:13:40.725 user 0m4.430s 00:13:40.725 sys 0m7.228s 00:13:40.725 00:59:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.725 00:59:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:40.725 ************************************ 00:13:40.725 END TEST xnvme_fio_plugin 00:13:40.725 ************************************ 00:13:40.725 00:59:02 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:40.725 00:59:02 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:40.725 00:59:02 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:40.725 00:59:02 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:40.725 00:59:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.725 00:59:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.725 00:59:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.725 ************************************ 00:13:40.725 START TEST xnvme_rpc 00:13:40.725 ************************************ 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:40.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83556 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83556 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83556 ']' 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.725 00:59:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.725 [2024-11-26 00:59:03.079079] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:40.725 [2024-11-26 00:59:03.079221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83556 ] 00:13:40.725 [2024-11-26 00:59:03.217432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:13:40.725 [2024-11-26 00:59:03.246432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.725 [2024-11-26 00:59:03.286801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.295 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.295 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:41.295 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.296 xnvme_bdev 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.296 00:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == 
"bdev_xnvme_create").params.conserve_cpu' 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83556 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83556 ']' 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83556 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83556 00:13:41.296 killing process with pid 83556 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83556' 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83556 00:13:41.296 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83556 00:13:41.866 00:13:41.866 real 0m1.631s 00:13:41.866 user 0m1.611s 00:13:41.866 sys 0m0.508s 00:13:41.866 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:41.866 00:59:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 ************************************ 00:13:41.866 END TEST xnvme_rpc 00:13:41.866 ************************************ 00:13:41.866 00:59:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:41.866 00:59:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:41.866 00:59:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:41.866 00:59:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 ************************************ 00:13:41.866 START TEST xnvme_bdevperf 00:13:41.866 ************************************ 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:41.866 00:59:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:41.866 { 00:13:41.866 
"subsystems": [ 00:13:41.866 { 00:13:41.866 "subsystem": "bdev", 00:13:41.866 "config": [ 00:13:41.866 { 00:13:41.866 "params": { 00:13:41.866 "io_mechanism": "io_uring", 00:13:41.866 "conserve_cpu": true, 00:13:41.866 "filename": "/dev/nvme0n1", 00:13:41.866 "name": "xnvme_bdev" 00:13:41.866 }, 00:13:41.866 "method": "bdev_xnvme_create" 00:13:41.866 }, 00:13:41.866 { 00:13:41.866 "method": "bdev_wait_for_examine" 00:13:41.866 } 00:13:41.866 ] 00:13:41.866 } 00:13:41.866 ] 00:13:41.866 } 00:13:41.866 [2024-11-26 00:59:04.772538] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:13:41.866 [2024-11-26 00:59:04.772698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83613 ] 00:13:42.126 [2024-11-26 00:59:04.912136] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:42.126 [2024-11-26 00:59:04.941389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.126 [2024-11-26 00:59:04.981538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.388 Running I/O for 5 seconds... 00:13:44.274 34890.00 IOPS, 136.29 MiB/s [2024-11-26T00:59:08.133Z] 34733.50 IOPS, 135.68 MiB/s [2024-11-26T00:59:09.519Z] 34874.00 IOPS, 136.23 MiB/s [2024-11-26T00:59:10.553Z] 37567.00 IOPS, 146.75 MiB/s 00:13:47.636 Latency(us) 00:13:47.636 [2024-11-26T00:59:10.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:47.636 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:47.636 xnvme_bdev : 5.00 37545.54 146.66 0.00 0.00 1700.39 727.83 13510.50 00:13:47.636 [2024-11-26T00:59:10.553Z] =================================================================================================================== 00:13:47.636 [2024-11-26T00:59:10.553Z] Total : 37545.54 146.66 0.00 0.00 1700.39 727.83 13510.50 00:13:47.636 00:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.636 00:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:47.636 00:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:47.636 00:59:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:47.636 00:59:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:47.636 { 00:13:47.636 "subsystems": [ 00:13:47.636 { 00:13:47.636 "subsystem": "bdev", 00:13:47.636 "config": [ 00:13:47.636 { 00:13:47.636 "params": { 00:13:47.636 "io_mechanism": "io_uring", 00:13:47.636 "conserve_cpu": true, 00:13:47.636 "filename": "/dev/nvme0n1", 00:13:47.636 "name": "xnvme_bdev" 00:13:47.636 }, 00:13:47.636 "method": "bdev_xnvme_create" 00:13:47.636 }, 00:13:47.636 { 00:13:47.636 "method": "bdev_wait_for_examine" 00:13:47.636 } 00:13:47.636 ] 00:13:47.636 } 00:13:47.636 ] 00:13:47.636 } 00:13:47.636 [2024-11-26 00:59:10.472297] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:13:47.636 [2024-11-26 00:59:10.472417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83683 ] 00:13:47.896 [2024-11-26 00:59:10.608434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:47.896 [2024-11-26 00:59:10.638132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.896 [2024-11-26 00:59:10.675475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.157 Running I/O for 5 seconds... 00:13:50.041 37434.00 IOPS, 146.23 MiB/s [2024-11-26T00:59:13.901Z] 38320.00 IOPS, 149.69 MiB/s [2024-11-26T00:59:14.845Z] 38405.33 IOPS, 150.02 MiB/s [2024-11-26T00:59:16.231Z] 38216.00 IOPS, 149.28 MiB/s 00:13:53.314 Latency(us) 00:13:53.314 [2024-11-26T00:59:16.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.314 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:53.314 xnvme_bdev : 5.00 38275.49 149.51 0.00 0.00 1668.26 734.13 7309.78 00:13:53.314 [2024-11-26T00:59:16.231Z] =================================================================================================================== 00:13:53.314 [2024-11-26T00:59:16.231Z] Total : 38275.49 149.51 0.00 0.00 1668.26 734.13 7309.78 00:13:53.314 00:13:53.314 real 0m11.375s 00:13:53.314 user 0m5.996s 00:13:53.314 sys 0m4.819s 00:13:53.314 00:59:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:53.314 ************************************ 00:13:53.314 END TEST xnvme_bdevperf 00:13:53.314 ************************************ 00:13:53.314 00:59:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:53.314 00:59:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:53.314 00:59:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:53.314 00:59:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:53.314 00:59:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:53.314 ************************************ 00:13:53.314 START TEST xnvme_fio_plugin 00:13:53.314 ************************************ 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- 
# local fio_dir=/usr/src/fio 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:53.314 00:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.314 { 00:13:53.314 "subsystems": [ 00:13:53.314 { 00:13:53.314 "subsystem": "bdev", 00:13:53.314 "config": [ 00:13:53.314 { 00:13:53.314 "params": { 00:13:53.314 "io_mechanism": "io_uring", 00:13:53.314 "conserve_cpu": true, 00:13:53.314 "filename": "/dev/nvme0n1", 00:13:53.314 "name": "xnvme_bdev" 00:13:53.314 }, 00:13:53.314 "method": "bdev_xnvme_create" 00:13:53.314 }, 00:13:53.314 { 00:13:53.314 "method": "bdev_wait_for_examine" 00:13:53.314 } 00:13:53.314 ] 00:13:53.314 } 00:13:53.314 ] 00:13:53.314 } 00:13:53.575 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:53.575 fio-3.35 00:13:53.575 Starting 1 thread 00:14:00.169 00:14:00.169 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83790: Tue Nov 26 00:59:21 2024 00:14:00.169 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(657MiB/5002msec) 00:14:00.169 slat (nsec): min=2714, max=69247, avg=3355.57, stdev=1774.36 00:14:00.169 clat (usec): min=994, max=3294, avg=1765.31, stdev=244.43 00:14:00.169 lat (usec): min=997, max=3335, avg=1768.66, stdev=244.73 00:14:00.169 clat percentiles (usec): 00:14:00.169 | 1.00th=[ 1319], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 1565], 00:14:00.169 | 30.00th=[ 1631], 40.00th=[ 1680], 50.00th=[ 1729], 60.00th=[ 1795], 00:14:00.169 | 70.00th=[ 1860], 80.00th=[ 1958], 90.00th=[ 2089], 95.00th=[ 2212], 00:14:00.169 | 
99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2737], 99.95th=[ 2802], 00:14:00.169 | 99.99th=[ 3130] 00:14:00.169 bw ( KiB/s): min=132096, max=137216, per=99.82%, avg=134257.78, stdev=1634.75, samples=9 00:14:00.169 iops : min=33024, max=34304, avg=33564.44, stdev=408.69, samples=9 00:14:00.169 lat (usec) : 1000=0.01% 00:14:00.169 lat (msec) : 2=84.14%, 4=15.85% 00:14:00.169 cpu : usr=58.53%, sys=37.75%, ctx=13, majf=0, minf=771 00:14:00.169 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:00.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.169 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:00.169 issued rwts: total=168192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.169 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:00.169 00:14:00.169 Run status group 0 (all jobs): 00:14:00.169 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=657MiB (689MB), run=5002-5002msec 00:14:00.169 ----------------------------------------------------- 00:14:00.169 Suppressions used: 00:14:00.169 count bytes template 00:14:00.169 1 11 /usr/src/fio/parse.c 00:14:00.169 1 8 libtcmalloc_minimal.so 00:14:00.169 1 904 libcrypto.so 00:14:00.169 ----------------------------------------------------- 00:14:00.169 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:00.169 00:59:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.169 { 00:14:00.169 "subsystems": [ 00:14:00.169 { 00:14:00.169 "subsystem": "bdev", 00:14:00.169 "config": [ 00:14:00.169 { 00:14:00.169 "params": { 00:14:00.169 "io_mechanism": "io_uring", 00:14:00.169 "conserve_cpu": true, 00:14:00.169 "filename": "/dev/nvme0n1", 00:14:00.169 "name": "xnvme_bdev" 00:14:00.169 }, 00:14:00.169 "method": "bdev_xnvme_create" 00:14:00.169 }, 00:14:00.169 { 00:14:00.169 "method": "bdev_wait_for_examine" 00:14:00.169 } 00:14:00.169 ] 00:14:00.169 } 00:14:00.169 ] 00:14:00.169 } 00:14:00.169 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:00.169 fio-3.35 00:14:00.169 Starting 1 thread 00:14:05.463 00:14:05.463 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83872: Tue Nov 26 00:59:27 2024 00:14:05.463 write: IOPS=35.0k, BW=137MiB/s (143MB/s)(684MiB/5002msec); 0 zone resets 00:14:05.463 slat (usec): min=2, max=119, avg= 3.59, stdev= 1.92 00:14:05.463 clat (usec): min=999, max=5071, avg=1683.55, stdev=270.07 00:14:05.463 lat (usec): min=1002, max=5075, avg=1687.15, stdev=270.48 00:14:05.463 clat percentiles (usec): 00:14:05.463 | 1.00th=[ 1188], 5.00th=[ 1319], 10.00th=[ 1385], 20.00th=[ 1467], 00:14:05.463 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1647], 60.00th=[ 1713], 00:14:05.463 | 70.00th=[ 1795], 80.00th=[ 1876], 90.00th=[ 2024], 95.00th=[ 2147], 00:14:05.463 | 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3261], 99.95th=[ 3654], 00:14:05.463 | 99.99th=[ 5014] 00:14:05.463 bw ( KiB/s): min=135552, max=140176, per=98.60%, avg=138061.33, stdev=1693.82, samples=9 00:14:05.463 iops : min=33888, max=35044, avg=34515.33, stdev=423.46, samples=9 00:14:05.463 lat (usec) : 1000=0.01% 00:14:05.463 lat (msec) : 2=89.07%, 4=10.89%, 10=0.04% 00:14:05.463 cpu : usr=61.51%, sys=34.65%, ctx=14, majf=0, minf=771 00:14:05.463 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:05.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:05.463 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:05.463 issued rwts: total=0,175094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:05.463 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:05.463 00:14:05.463 Run status group 0 (all jobs): 00:14:05.463 WRITE: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=684MiB (717MB), run=5002-5002msec 00:14:05.463 ----------------------------------------------------- 00:14:05.463 Suppressions used: 00:14:05.463 count bytes template 00:14:05.463 1 11 /usr/src/fio/parse.c 00:14:05.463 1 8 libtcmalloc_minimal.so 00:14:05.463 1 904 libcrypto.so 00:14:05.463 ----------------------------------------------------- 00:14:05.463 00:14:05.463 00:14:05.463 real 0m12.252s 00:14:05.463 
user 0m7.271s 00:14:05.463 sys 0m4.287s 00:14:05.724 00:59:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:05.724 00:59:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:05.724 ************************************ 00:14:05.724 END TEST xnvme_fio_plugin 00:14:05.724 ************************************ 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:05.724 00:59:28 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:05.724 00:59:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:05.724 00:59:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:05.724 00:59:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:05.724 ************************************ 00:14:05.724 START TEST xnvme_rpc 00:14:05.724 ************************************ 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=83953 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 83953 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 83953 ']' 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:05.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:05.724 00:59:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:05.724 [2024-11-26 00:59:28.540243] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
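The xtrace above shows how the suite moves to the next backend: it rewrites the method_bdev_xnvme_create_0 map (io_mechanism=io_uring_cmd, filename=/dev/ng0n1, the NVMe generic character device) and regenerates the same config template rather than keeping a second copy. An equivalent one-off edit of an already-generated config, sketched with jq (the cfg variable is assumed to hold the JSON shown earlier):

  # Retarget the first bdev_xnvme_create entry from the block device to
  # the char device and switch the I/O mechanism to io_uring_cmd.
  jq '.subsystems[0].config[0].params
        |= (.io_mechanism = "io_uring_cmd" | .filename = "/dev/ng0n1")' \
    <<< "$cfg"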
00:14:05.724 [2024-11-26 00:59:28.540394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83953 ] 00:14:05.986 [2024-11-26 00:59:28.678624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:05.986 [2024-11-26 00:59:28.706788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.986 [2024-11-26 00:59:28.746808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.562 xnvme_bdev 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.562 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- 
# rpc_xnvme conserve_cpu 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 83953 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 83953 ']' 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 83953 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83953 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.824 killing process with pid 83953 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83953' 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 83953 00:14:06.824 00:59:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 83953 00:14:07.398 00:14:07.398 real 0m1.620s 00:14:07.398 user 0m1.557s 00:14:07.398 sys 0m0.538s 00:14:07.398 00:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:07.398 00:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.398 ************************************ 00:14:07.398 END TEST xnvme_rpc 00:14:07.398 ************************************ 00:14:07.398 00:59:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:07.398 00:59:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:07.398 00:59:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:07.398 00:59:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.398 ************************************ 00:14:07.398 START TEST xnvme_bdevperf 00:14:07.398 ************************************ 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- 
# /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.398 00:59:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.398 { 00:14:07.398 "subsystems": [ 00:14:07.398 { 00:14:07.398 "subsystem": "bdev", 00:14:07.398 "config": [ 00:14:07.398 { 00:14:07.398 "params": { 00:14:07.398 "io_mechanism": "io_uring_cmd", 00:14:07.398 "conserve_cpu": false, 00:14:07.398 "filename": "/dev/ng0n1", 00:14:07.398 "name": "xnvme_bdev" 00:14:07.398 }, 00:14:07.398 "method": "bdev_xnvme_create" 00:14:07.398 }, 00:14:07.398 { 00:14:07.398 "method": "bdev_wait_for_examine" 00:14:07.398 } 00:14:07.398 ] 00:14:07.398 } 00:14:07.398 ] 00:14:07.398 } 00:14:07.398 [2024-11-26 00:59:30.214691] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:07.398 [2024-11-26 00:59:30.214826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84010 ] 00:14:07.660 [2024-11-26 00:59:30.351303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:07.660 [2024-11-26 00:59:30.381760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.660 [2024-11-26 00:59:30.422446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.660 Running I/O for 5 seconds... 
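The xnvme_rpc pass traced above boils down to a short JSON-RPC round trip against the freshly started spdk_tgt. Condensed from the logged rpc_cmd calls (rpc_cmd being the harness's wrapper around SPDK's rpc.py, pointed at /var/tmp/spdk.sock):

  # Create an xnvme bdev over the NVMe char device via io_uring_cmd; the
  # empty trailing arg leaves conserve_cpu at false. Then read one of the
  # registered params back out of the runtime config, and tear down.
  rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''
  rpc_cmd framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
  rpc_cmd bdev_xnvme_delete xnvme_bdev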
00:14:09.992 39841.00 IOPS, 155.63 MiB/s [2024-11-26T00:59:33.854Z] 39421.00 IOPS, 153.99 MiB/s [2024-11-26T00:59:34.796Z] 40104.33 IOPS, 156.66 MiB/s [2024-11-26T00:59:35.739Z] 42014.75 IOPS, 164.12 MiB/s [2024-11-26T00:59:35.739Z] 42964.00 IOPS, 167.83 MiB/s 00:14:12.822 Latency(us) 00:14:12.822 [2024-11-26T00:59:35.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.822 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:12.822 xnvme_bdev : 5.00 42957.31 167.80 0.00 0.00 1486.91 253.64 9023.80 00:14:12.822 [2024-11-26T00:59:35.739Z] =================================================================================================================== 00:14:12.822 [2024-11-26T00:59:35.739Z] Total : 42957.31 167.80 0.00 0.00 1486.91 253.64 9023.80 00:14:12.822 00:59:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:12.822 00:59:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:12.822 00:59:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:12.822 00:59:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:12.822 00:59:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.084 { 00:14:13.084 "subsystems": [ 00:14:13.084 { 00:14:13.084 "subsystem": "bdev", 00:14:13.084 "config": [ 00:14:13.084 { 00:14:13.084 "params": { 00:14:13.084 "io_mechanism": "io_uring_cmd", 00:14:13.084 "conserve_cpu": false, 00:14:13.084 "filename": "/dev/ng0n1", 00:14:13.084 "name": "xnvme_bdev" 00:14:13.084 }, 00:14:13.084 "method": "bdev_xnvme_create" 00:14:13.084 }, 00:14:13.084 { 00:14:13.084 "method": "bdev_wait_for_examine" 00:14:13.084 } 00:14:13.084 ] 00:14:13.084 } 00:14:13.084 ] 00:14:13.084 } 00:14:13.084 [2024-11-26 00:59:35.792471] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:13.084 [2024-11-26 00:59:35.792591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84079 ] 00:14:13.084 [2024-11-26 00:59:35.933926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:13.084 [2024-11-26 00:59:35.960624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.084 [2024-11-26 00:59:35.993040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.346 Running I/O for 5 seconds... 
00:14:15.233 49227.00 IOPS, 192.29 MiB/s [2024-11-26T00:59:39.093Z] 49655.50 IOPS, 193.97 MiB/s [2024-11-26T00:59:40.479Z] 49870.67 IOPS, 194.81 MiB/s [2024-11-26T00:59:41.421Z] 49606.75 IOPS, 193.78 MiB/s 00:14:18.504 Latency(us) 00:14:18.504 [2024-11-26T00:59:41.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:18.504 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:18.504 xnvme_bdev : 5.00 49507.15 193.39 0.00 0.00 1290.16 288.30 4234.63 00:14:18.504 [2024-11-26T00:59:41.421Z] =================================================================================================================== 00:14:18.504 [2024-11-26T00:59:41.421Z] Total : 49507.15 193.39 0.00 0.00 1290.16 288.30 4234.63 00:14:18.504 00:59:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.504 00:59:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:18.504 00:59:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:18.504 00:59:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:18.504 00:59:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.504 { 00:14:18.504 "subsystems": [ 00:14:18.504 { 00:14:18.504 "subsystem": "bdev", 00:14:18.504 "config": [ 00:14:18.504 { 00:14:18.504 "params": { 00:14:18.504 "io_mechanism": "io_uring_cmd", 00:14:18.504 "conserve_cpu": false, 00:14:18.504 "filename": "/dev/ng0n1", 00:14:18.504 "name": "xnvme_bdev" 00:14:18.504 }, 00:14:18.504 "method": "bdev_xnvme_create" 00:14:18.504 }, 00:14:18.504 { 00:14:18.504 "method": "bdev_wait_for_examine" 00:14:18.504 } 00:14:18.504 ] 00:14:18.504 } 00:14:18.504 ] 00:14:18.504 } 00:14:18.504 [2024-11-26 00:59:41.303989] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:18.504 [2024-11-26 00:59:41.304102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84143 ] 00:14:18.763 [2024-11-26 00:59:41.436111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:18.763 [2024-11-26 00:59:41.460627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.763 [2024-11-26 00:59:41.482280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.763 Running I/O for 5 seconds... 
00:14:21.095 92800.00 IOPS, 362.50 MiB/s [2024-11-26T00:59:44.583Z] 92960.00 IOPS, 363.12 MiB/s [2024-11-26T00:59:45.970Z] 92970.67 IOPS, 363.17 MiB/s [2024-11-26T00:59:46.914Z] 92992.00 IOPS, 363.25 MiB/s [2024-11-26T00:59:46.914Z] 92864.00 IOPS, 362.75 MiB/s 00:14:23.997 Latency(us) 00:14:23.997 [2024-11-26T00:59:46.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:23.997 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:23.997 xnvme_bdev : 5.00 92844.43 362.67 0.00 0.00 686.61 409.60 1915.67 00:14:23.997 [2024-11-26T00:59:46.914Z] =================================================================================================================== 00:14:23.997 [2024-11-26T00:59:46.914Z] Total : 92844.43 362.67 0.00 0.00 686.61 409.60 1915.67 00:14:23.997 00:59:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.997 00:59:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:23.997 00:59:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:23.997 00:59:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:23.997 00:59:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:23.997 { 00:14:23.997 "subsystems": [ 00:14:23.997 { 00:14:23.997 "subsystem": "bdev", 00:14:23.997 "config": [ 00:14:23.997 { 00:14:23.997 "params": { 00:14:23.997 "io_mechanism": "io_uring_cmd", 00:14:23.997 "conserve_cpu": false, 00:14:23.997 "filename": "/dev/ng0n1", 00:14:23.997 "name": "xnvme_bdev" 00:14:23.997 }, 00:14:23.997 "method": "bdev_xnvme_create" 00:14:23.997 }, 00:14:23.997 { 00:14:23.997 "method": "bdev_wait_for_examine" 00:14:23.997 } 00:14:23.997 ] 00:14:23.997 } 00:14:23.997 ] 00:14:23.997 } 00:14:23.997 [2024-11-26 00:59:46.789208] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:23.997 [2024-11-26 00:59:46.789313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84206 ] 00:14:24.258 [2024-11-26 00:59:46.921386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:24.258 [2024-11-26 00:59:46.944905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.258 [2024-11-26 00:59:46.966831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.258 Running I/O for 5 seconds... 
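Beyond randread and randwrite, this char-device pass also sweeps the bdev-level data-management workloads; bdevperf's -w flag selects them, and on an NVMe backend unmap and write_zeroes typically map to the Dataset Management (deallocate) and Write Zeroes commands. Since the four runs differ only in that flag, the sweep reduces to the loop below (same assumed cfg/here-string plumbing as in the earlier sketches):

  # Run each I/O pattern for 5 seconds against the same io_uring_cmd bdev.
  for wl in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 \
      -q 64 -w "$wl" -t 5 -T xnvme_bdev -o 4096 62<<< "$cfg"
  done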
00:14:26.142 49347.00 IOPS, 192.76 MiB/s [2024-11-26T00:59:50.442Z] 38550.50 IOPS, 150.59 MiB/s [2024-11-26T00:59:51.383Z] 34876.33 IOPS, 136.24 MiB/s [2024-11-26T00:59:52.326Z] 33239.00 IOPS, 129.84 MiB/s [2024-11-26T00:59:52.326Z] 31810.40 IOPS, 124.26 MiB/s 00:14:29.409 Latency(us) 00:14:29.409 [2024-11-26T00:59:52.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.409 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:29.409 xnvme_bdev : 5.01 31778.20 124.13 0.00 0.00 2008.91 57.90 24702.03 00:14:29.409 [2024-11-26T00:59:52.326Z] =================================================================================================================== 00:14:29.409 [2024-11-26T00:59:52.326Z] Total : 31778.20 124.13 0.00 0.00 2008.91 57.90 24702.03 00:14:29.409 00:14:29.409 real 0m22.178s 00:14:29.409 user 0m11.435s 00:14:29.409 sys 0m10.324s 00:14:29.409 ************************************ 00:14:29.409 END TEST xnvme_bdevperf 00:14:29.409 ************************************ 00:14:29.409 00:59:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.409 00:59:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.670 00:59:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:29.670 00:59:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.670 00:59:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.670 00:59:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.670 ************************************ 00:14:29.670 START TEST xnvme_fio_plugin 00:14:29.670 ************************************ 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 
-- # gen_conf 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:29.670 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:29.671 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:29.671 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:29.671 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:29.671 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:29.671 00:59:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:29.671 { 00:14:29.671 "subsystems": [ 00:14:29.671 { 00:14:29.671 "subsystem": "bdev", 00:14:29.671 "config": [ 00:14:29.671 { 00:14:29.671 "params": { 00:14:29.671 "io_mechanism": "io_uring_cmd", 00:14:29.671 "conserve_cpu": false, 00:14:29.671 "filename": "/dev/ng0n1", 00:14:29.671 "name": "xnvme_bdev" 00:14:29.671 }, 00:14:29.671 "method": "bdev_xnvme_create" 00:14:29.671 }, 00:14:29.671 { 00:14:29.671 "method": "bdev_wait_for_examine" 00:14:29.671 } 00:14:29.671 ] 00:14:29.671 } 00:14:29.671 ] 00:14:29.671 } 00:14:29.931 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:29.932 fio-3.35 00:14:29.932 Starting 1 thread 00:14:35.223 00:14:35.223 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84313: Tue Nov 26 00:59:58 2024 00:14:35.223 read: IOPS=37.8k, BW=148MiB/s (155MB/s)(739MiB/5001msec) 00:14:35.223 slat (nsec): min=2719, max=79669, avg=3700.89, stdev=2057.74 00:14:35.223 clat (usec): min=900, max=3456, avg=1540.31, stdev=239.28 00:14:35.223 lat (usec): min=902, max=3519, avg=1544.01, stdev=239.74 00:14:35.223 clat percentiles (usec): 00:14:35.223 | 1.00th=[ 1090], 5.00th=[ 1205], 10.00th=[ 1270], 20.00th=[ 1352], 00:14:35.223 | 30.00th=[ 1401], 40.00th=[ 1450], 50.00th=[ 1516], 60.00th=[ 1565], 00:14:35.223 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1860], 95.00th=[ 1975], 00:14:35.223 | 99.00th=[ 2212], 99.50th=[ 2343], 99.90th=[ 2737], 99.95th=[ 2900], 00:14:35.223 | 99.99th=[ 3294] 00:14:35.223 bw ( KiB/s): min=140288, max=159232, per=99.53%, avg=150551.22, stdev=4867.25, samples=9 00:14:35.223 iops : min=35072, max=39808, avg=37637.78, stdev=1216.81, samples=9 00:14:35.223 lat (usec) : 1000=0.10% 00:14:35.223 lat (msec) : 2=95.61%, 4=4.28% 00:14:35.223 cpu : usr=33.89%, sys=64.67%, ctx=8, majf=0, minf=771 00:14:35.223 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:35.223 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:35.223 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:14:35.223 issued rwts: total=189117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:35.223 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:35.223 00:14:35.223 Run status group 0 (all jobs): 00:14:35.223 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=739MiB (775MB), run=5001-5001msec 00:14:35.794 ----------------------------------------------------- 00:14:35.794 Suppressions used: 00:14:35.794 count bytes template 00:14:35.794 1 11 /usr/src/fio/parse.c 00:14:35.794 1 8 libtcmalloc_minimal.so 00:14:35.794 1 904 libcrypto.so 00:14:35.794 ----------------------------------------------------- 00:14:35.794 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:35.794 00:59:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:35.794 { 00:14:35.794 "subsystems": [ 00:14:35.794 { 00:14:35.794 "subsystem": "bdev", 00:14:35.794 "config": [ 00:14:35.794 { 00:14:35.794 "params": { 00:14:35.794 "io_mechanism": "io_uring_cmd", 00:14:35.794 "conserve_cpu": false, 00:14:35.794 "filename": "/dev/ng0n1", 00:14:35.794 "name": "xnvme_bdev" 00:14:35.794 }, 00:14:35.794 "method": "bdev_xnvme_create" 00:14:35.794 }, 00:14:35.794 { 00:14:35.794 "method": "bdev_wait_for_examine" 00:14:35.794 } 00:14:35.794 ] 00:14:35.794 } 00:14:35.794 ] 00:14:35.794 } 00:14:36.054 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:36.054 fio-3.35 00:14:36.054 Starting 1 thread 00:14:41.340 00:14:41.340 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84393: Tue Nov 26 01:00:04 2024 00:14:41.340 write: IOPS=38.8k, BW=152MiB/s (159MB/s)(758MiB/5001msec); 0 zone resets 00:14:41.340 slat (usec): min=2, max=352, avg= 3.75, stdev= 2.72 00:14:41.340 clat (usec): min=210, max=5973, avg=1499.13, stdev=310.89 00:14:41.340 lat (usec): min=218, max=5977, avg=1502.88, stdev=311.21 00:14:41.340 clat percentiles (usec): 00:14:41.340 | 1.00th=[ 979], 5.00th=[ 1090], 10.00th=[ 1156], 20.00th=[ 1254], 00:14:41.340 | 30.00th=[ 1319], 40.00th=[ 1385], 50.00th=[ 1467], 60.00th=[ 1532], 00:14:41.340 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1876], 95.00th=[ 2008], 00:14:41.340 | 99.00th=[ 2343], 99.50th=[ 2638], 99.90th=[ 3589], 99.95th=[ 3916], 00:14:41.340 | 99.99th=[ 5342] 00:14:41.340 bw ( KiB/s): min=140232, max=171456, per=100.00%, avg=156045.33, stdev=13093.29, samples=9 00:14:41.340 iops : min=35058, max=42864, avg=39011.33, stdev=3273.32, samples=9 00:14:41.340 lat (usec) : 250=0.01%, 500=0.06%, 750=0.18%, 1000=1.07% 00:14:41.340 lat (msec) : 2=93.28%, 4=5.36%, 10=0.05% 00:14:41.340 cpu : usr=37.26%, sys=60.70%, ctx=71, majf=0, minf=771 00:14:41.340 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.1%, 16=24.5%, 32=51.2%, >=64=1.6% 00:14:41.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:41.340 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:41.340 issued rwts: total=0,194032,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:41.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:41.340 00:14:41.340 Run status group 0 (all jobs): 00:14:41.340 WRITE: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=758MiB (795MB), run=5001-5001msec 00:14:41.913 ----------------------------------------------------- 00:14:41.913 Suppressions used: 00:14:41.913 count bytes template 00:14:41.913 1 11 /usr/src/fio/parse.c 00:14:41.913 1 8 libtcmalloc_minimal.so 00:14:41.913 1 904 libcrypto.so 00:14:41.913 ----------------------------------------------------- 00:14:41.913 00:14:41.913 00:14:41.913 real 0m12.271s 00:14:41.913 user 0m4.841s 00:14:41.913 sys 0m6.946s 00:14:41.913 01:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:41.913 ************************************ 00:14:41.913 END TEST xnvme_fio_plugin 00:14:41.913 ************************************ 00:14:41.913 01:00:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:41.913 01:00:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:41.913 01:00:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:41.913 01:00:04 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:14:41.914 01:00:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:41.914 01:00:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:41.914 01:00:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:41.914 01:00:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.914 ************************************ 00:14:41.914 START TEST xnvme_rpc 00:14:41.914 ************************************ 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=84473 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 84473 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 84473 ']' 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.914 01:00:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:42.175 [2024-11-26 01:00:04.832404] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:42.175 [2024-11-26 01:00:04.832558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84473 ] 00:14:42.175 [2024-11-26 01:00:04.973209] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
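The xnvme_rpc test starting here drives the same create/inspect/delete cycle through the RPC server; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and the -c flag is the conserve_cpu toggle selected by cc["true"]=-c above. Against an already-running spdk_tgt, the sequence reduces to roughly this sketch (arguments copied from the trace; the forwarding to rpc.py is how the wrapper behaves, not shown verbatim in this log):

cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
# inspect the registered params the same way the test's rpc_xnvme helper does
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev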
00:14:42.175 [2024-11-26 01:00:05.002030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.176 [2024-11-26 01:00:05.042410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.750 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:42.750 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:42.750 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:42.750 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.750 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.012 xnvme_bdev 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 84473 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 84473 ']' 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 84473 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84473 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.012 killing process with pid 84473 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84473' 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 84473 00:14:43.012 01:00:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 84473 00:14:43.585 00:14:43.585 real 0m1.622s 00:14:43.585 user 0m1.594s 00:14:43.585 sys 0m0.512s 00:14:43.585 01:00:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.585 ************************************ 00:14:43.585 END TEST xnvme_rpc 00:14:43.585 ************************************ 00:14:43.585 01:00:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.585 01:00:06 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:43.585 01:00:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:43.585 01:00:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.585 01:00:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.585 ************************************ 00:14:43.585 START TEST xnvme_bdevperf 00:14:43.585 ************************************ 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.585 01:00:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.585 { 00:14:43.585 "subsystems": [ 00:14:43.585 { 00:14:43.585 "subsystem": "bdev", 00:14:43.585 "config": [ 
00:14:43.585 { 00:14:43.585 "params": { 00:14:43.585 "io_mechanism": "io_uring_cmd", 00:14:43.585 "conserve_cpu": true, 00:14:43.585 "filename": "/dev/ng0n1", 00:14:43.585 "name": "xnvme_bdev" 00:14:43.585 }, 00:14:43.585 "method": "bdev_xnvme_create" 00:14:43.585 }, 00:14:43.585 { 00:14:43.585 "method": "bdev_wait_for_examine" 00:14:43.585 } 00:14:43.585 ] 00:14:43.585 } 00:14:43.585 ] 00:14:43.585 } 00:14:43.846 [2024-11-26 01:00:06.499897] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:43.847 [2024-11-26 01:00:06.500024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84536 ] 00:14:43.847 [2024-11-26 01:00:06.636976] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:43.847 [2024-11-26 01:00:06.668566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.847 [2024-11-26 01:00:06.706730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.108 Running I/O for 5 seconds... 00:14:45.998 43227.00 IOPS, 168.86 MiB/s [2024-11-26T01:00:09.859Z] 41895.50 IOPS, 163.65 MiB/s [2024-11-26T01:00:11.246Z] 41027.67 IOPS, 160.26 MiB/s [2024-11-26T01:00:12.189Z] 40945.75 IOPS, 159.94 MiB/s [2024-11-26T01:00:12.189Z] 40710.80 IOPS, 159.03 MiB/s 00:14:49.272 Latency(us) 00:14:49.272 [2024-11-26T01:00:12.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.272 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:49.272 xnvme_bdev : 5.00 40704.86 159.00 0.00 0.00 1568.66 683.72 9931.22 00:14:49.272 [2024-11-26T01:00:12.189Z] =================================================================================================================== 00:14:49.272 [2024-11-26T01:00:12.189Z] Total : 40704.86 159.00 0.00 0.00 1568.66 683.72 9931.22 00:14:49.272 01:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.272 01:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:49.272 01:00:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.272 01:00:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.272 01:00:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.272 { 00:14:49.272 "subsystems": [ 00:14:49.272 { 00:14:49.272 "subsystem": "bdev", 00:14:49.272 "config": [ 00:14:49.272 { 00:14:49.272 "params": { 00:14:49.272 "io_mechanism": "io_uring_cmd", 00:14:49.272 "conserve_cpu": true, 00:14:49.272 "filename": "/dev/ng0n1", 00:14:49.272 "name": "xnvme_bdev" 00:14:49.272 }, 00:14:49.272 "method": "bdev_xnvme_create" 00:14:49.272 }, 00:14:49.272 { 00:14:49.272 "method": "bdev_wait_for_examine" 00:14:49.272 } 00:14:49.272 ] 00:14:49.272 } 00:14:49.272 ] 00:14:49.273 } 00:14:49.534 [2024-11-26 01:00:12.193726] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:14:49.534 [2024-11-26 01:00:12.193881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84599 ] 00:14:49.534 [2024-11-26 01:00:12.330634] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:49.534 [2024-11-26 01:00:12.361280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.534 [2024-11-26 01:00:12.400127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.796 Running I/O for 5 seconds... 00:14:51.680 43708.00 IOPS, 170.73 MiB/s [2024-11-26T01:00:15.987Z] 44135.50 IOPS, 172.40 MiB/s [2024-11-26T01:00:16.641Z] 44253.00 IOPS, 172.86 MiB/s [2024-11-26T01:00:17.584Z] 44762.00 IOPS, 174.85 MiB/s 00:14:54.667 Latency(us) 00:14:54.667 [2024-11-26T01:00:17.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.667 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:54.667 xnvme_bdev : 5.00 44622.13 174.31 0.00 0.00 1430.41 335.56 8922.98 00:14:54.667 [2024-11-26T01:00:17.584Z] =================================================================================================================== 00:14:54.667 [2024-11-26T01:00:17.584Z] Total : 44622.13 174.31 0.00 0.00 1430.41 335.56 8922.98 00:14:54.927 01:00:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.927 01:00:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:54.927 01:00:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:54.927 01:00:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:54.927 01:00:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.927 { 00:14:54.927 "subsystems": [ 00:14:54.927 { 00:14:54.927 "subsystem": "bdev", 00:14:54.927 "config": [ 00:14:54.927 { 00:14:54.927 "params": { 00:14:54.927 "io_mechanism": "io_uring_cmd", 00:14:54.927 "conserve_cpu": true, 00:14:54.927 "filename": "/dev/ng0n1", 00:14:54.927 "name": "xnvme_bdev" 00:14:54.927 }, 00:14:54.927 "method": "bdev_xnvme_create" 00:14:54.927 }, 00:14:54.927 { 00:14:54.927 "method": "bdev_wait_for_examine" 00:14:54.927 } 00:14:54.927 ] 00:14:54.927 } 00:14:54.927 ] 00:14:54.927 } 00:14:55.188 [2024-11-26 01:00:17.872397] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:14:55.188 [2024-11-26 01:00:17.872528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84668 ] 00:14:55.188 [2024-11-26 01:00:18.008406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:55.188 [2024-11-26 01:00:18.040285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.188 [2024-11-26 01:00:18.079116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.449 Running I/O for 5 seconds... 
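As the repeated 'for io_pattern in "${io_pattern_ref[@]}"' lines show, each of these five-second passes comes from one loop over the io_uring_cmd pattern list in xnvme.sh. Stripped of the xtrace noise, the driver is shaped approximately like this (a sketch: gen_conf and spdk_dir stand in for harness-provided pieces, and the pattern list is inferred from the four workloads observed in this log):

io_uring_cmd=(randread randwrite unmap write_zeroes)

xnvme_bdevperf() {
    local io_pattern
    local -n io_pattern_ref=io_uring_cmd   # nameref, as in the trace
    for io_pattern in "${io_pattern_ref[@]}"; do
        # gen_conf emits the bdev JSON; <(...) is what puts it on /dev/fd/62
        "$spdk_dir/build/examples/bdevperf" --json <(gen_conf) \
            -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done
}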
00:14:57.340 74048.00 IOPS, 289.25 MiB/s [2024-11-26T01:00:21.644Z] 82752.00 IOPS, 323.25 MiB/s [2024-11-26T01:00:22.587Z] 82005.33 IOPS, 320.33 MiB/s [2024-11-26T01:00:23.529Z] 80800.00 IOPS, 315.62 MiB/s 00:15:00.612 Latency(us) 00:15:00.612 [2024-11-26T01:00:23.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.612 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:00.613 xnvme_bdev : 5.00 80468.32 314.33 0.00 0.00 791.90 378.09 2823.09 00:15:00.613 [2024-11-26T01:00:23.530Z] =================================================================================================================== 00:15:00.613 [2024-11-26T01:00:23.530Z] Total : 80468.32 314.33 0.00 0.00 791.90 378.09 2823.09 00:15:00.613 01:00:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:00.613 01:00:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:00.613 01:00:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:00.613 01:00:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:00.613 01:00:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:00.613 { 00:15:00.613 "subsystems": [ 00:15:00.613 { 00:15:00.613 "subsystem": "bdev", 00:15:00.613 "config": [ 00:15:00.613 { 00:15:00.613 "params": { 00:15:00.613 "io_mechanism": "io_uring_cmd", 00:15:00.613 "conserve_cpu": true, 00:15:00.613 "filename": "/dev/ng0n1", 00:15:00.613 "name": "xnvme_bdev" 00:15:00.613 }, 00:15:00.613 "method": "bdev_xnvme_create" 00:15:00.613 }, 00:15:00.613 { 00:15:00.613 "method": "bdev_wait_for_examine" 00:15:00.613 } 00:15:00.613 ] 00:15:00.613 } 00:15:00.613 ] 00:15:00.613 } 00:15:00.873 [2024-11-26 01:00:23.550859] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:00.873 [2024-11-26 01:00:23.550995] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84731 ] 00:15:00.873 [2024-11-26 01:00:23.683924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:00.873 [2024-11-26 01:00:23.710822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.873 [2024-11-26 01:00:23.733743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.133 Running I/O for 5 seconds... 
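The Average latency column squares with queue-depth arithmetic: bdevperf keeps 64 IOs in flight (-q 64), so by Little's law the mean latency should sit near queue_depth / IOPS. For the unmap totals reported above:

# Little's law: latency ~= queue_depth / IOPS while the queue stays full
awk 'BEGIN { printf "%.1f us\n", 64 / 80468.32 * 1e6 }'
# prints 795.3 us, in line with the reported 791.90 us average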
00:15:03.013 50272.00 IOPS, 196.38 MiB/s [2024-11-26T01:00:26.870Z] 49947.50 IOPS, 195.11 MiB/s [2024-11-26T01:00:28.251Z] 49755.67 IOPS, 194.36 MiB/s [2024-11-26T01:00:29.191Z] 48041.50 IOPS, 187.66 MiB/s [2024-11-26T01:00:29.191Z] 44815.20 IOPS, 175.06 MiB/s 00:15:06.274 Latency(us) 00:15:06.274 [2024-11-26T01:00:29.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.274 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:06.274 xnvme_bdev : 5.00 44785.54 174.94 0.00 0.00 1423.85 75.62 22080.59 00:15:06.274 [2024-11-26T01:00:29.191Z] =================================================================================================================== 00:15:06.274 [2024-11-26T01:00:29.191Z] Total : 44785.54 174.94 0.00 0.00 1423.85 75.62 22080.59 00:15:06.274 00:15:06.274 real 0m22.658s 00:15:06.274 user 0m15.380s 00:15:06.274 sys 0m5.137s 00:15:06.274 01:00:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.274 ************************************ 00:15:06.274 END TEST xnvme_bdevperf 00:15:06.274 ************************************ 00:15:06.274 01:00:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:06.275 01:00:29 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:06.275 01:00:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.275 01:00:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.275 01:00:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.275 ************************************ 00:15:06.275 START TEST xnvme_fio_plugin 00:15:06.275 ************************************ 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:06.275 01:00:29 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:06.275 { 00:15:06.275 "subsystems": [ 00:15:06.275 { 00:15:06.275 "subsystem": "bdev", 00:15:06.275 "config": [ 00:15:06.275 { 00:15:06.275 "params": { 00:15:06.275 "io_mechanism": "io_uring_cmd", 00:15:06.275 "conserve_cpu": true, 00:15:06.275 "filename": "/dev/ng0n1", 00:15:06.275 "name": "xnvme_bdev" 00:15:06.275 }, 00:15:06.275 "method": "bdev_xnvme_create" 00:15:06.275 }, 00:15:06.275 { 00:15:06.275 "method": "bdev_wait_for_examine" 00:15:06.275 } 00:15:06.275 ] 00:15:06.275 } 00:15:06.275 ] 00:15:06.275 } 00:15:06.535 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:06.535 fio-3.35 00:15:06.535 Starting 1 thread 00:15:13.120 00:15:13.120 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84838: Tue Nov 26 01:00:34 2024 00:15:13.120 read: IOPS=35.9k, BW=140MiB/s (147MB/s)(701MiB/5001msec) 00:15:13.120 slat (usec): min=2, max=127, avg= 3.77, stdev= 2.31 00:15:13.120 clat (usec): min=869, max=3232, avg=1629.67, stdev=266.31 00:15:13.120 lat (usec): min=872, max=3242, avg=1633.44, stdev=266.87 00:15:13.120 clat percentiles (usec): 00:15:13.120 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1401], 00:15:13.120 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1663], 00:15:13.120 | 70.00th=[ 1745], 80.00th=[ 1844], 90.00th=[ 1991], 95.00th=[ 2114], 00:15:13.120 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2868], 99.95th=[ 2999], 00:15:13.120 | 99.99th=[ 3163] 00:15:13.120 bw ( KiB/s): min=137728, max=149504, per=100.00%, avg=143758.22, stdev=4220.34, samples=9 00:15:13.120 iops : min=34432, max=37376, avg=35939.56, stdev=1055.08, samples=9 00:15:13.120 lat (usec) : 1000=0.11% 00:15:13.120 lat (msec) : 2=90.44%, 4=9.45% 00:15:13.120 cpu : usr=51.68%, sys=44.86%, ctx=10, majf=0, minf=771 00:15:13.120 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:13.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:13.120 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:13.120 issued rwts: total=179456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:13.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:13.120 00:15:13.120 Run status group 0 (all jobs): 00:15:13.120 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=701MiB (735MB), run=5001-5001msec 00:15:13.120 ----------------------------------------------------- 00:15:13.120 Suppressions used: 00:15:13.120 count bytes template 00:15:13.120 1 11 /usr/src/fio/parse.c 00:15:13.120 1 8 libtcmalloc_minimal.so 00:15:13.120 1 904 libcrypto.so 00:15:13.120 ----------------------------------------------------- 00:15:13.120 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.120 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:13.121 01:00:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:13.121 { 00:15:13.121 "subsystems": [ 00:15:13.121 { 00:15:13.121 "subsystem": "bdev", 00:15:13.121 "config": [ 00:15:13.121 { 00:15:13.121 "params": { 00:15:13.121 "io_mechanism": "io_uring_cmd", 00:15:13.121 "conserve_cpu": true, 00:15:13.121 "filename": "/dev/ng0n1", 00:15:13.121 "name": "xnvme_bdev" 00:15:13.121 }, 00:15:13.121 "method": "bdev_xnvme_create" 00:15:13.121 }, 00:15:13.121 { 00:15:13.121 "method": "bdev_wait_for_examine" 00:15:13.121 } 00:15:13.121 ] 00:15:13.121 } 00:15:13.121 ] 00:15:13.121 } 00:15:13.121 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:13.121 fio-3.35 00:15:13.121 Starting 1 thread 00:15:18.428 00:15:18.428 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=84923: Tue Nov 26 01:00:40 2024 00:15:18.428 write: IOPS=40.6k, BW=159MiB/s (166MB/s)(794MiB/5001msec); 0 zone resets 00:15:18.428 slat (usec): min=2, max=248, avg= 3.75, stdev= 2.13 00:15:18.428 clat (usec): min=443, max=4678, avg=1426.79, stdev=288.18 00:15:18.428 lat (usec): min=460, max=4682, avg=1430.53, stdev=288.68 00:15:18.428 clat percentiles (usec): 00:15:18.428 | 1.00th=[ 996], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:15:18.428 | 30.00th=[ 1237], 40.00th=[ 1303], 50.00th=[ 1385], 60.00th=[ 1467], 00:15:18.428 | 70.00th=[ 1549], 80.00th=[ 1663], 90.00th=[ 1795], 95.00th=[ 1926], 00:15:18.428 | 99.00th=[ 2245], 99.50th=[ 2474], 99.90th=[ 3064], 99.95th=[ 3359], 00:15:18.428 | 99.99th=[ 3851] 00:15:18.428 bw ( KiB/s): min=138792, max=185696, per=100.00%, avg=163314.67, stdev=12864.83, samples=9 00:15:18.428 iops : min=34698, max=46424, avg=40828.67, stdev=3216.21, samples=9 00:15:18.428 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.11% 00:15:18.428 lat (msec) : 2=95.43%, 4=3.44%, 10=0.01% 00:15:18.428 cpu : usr=60.82%, sys=34.30%, ctx=8, majf=0, minf=771 00:15:18.428 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:18.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:18.428 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:18.428 issued rwts: total=0,203154,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:18.428 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:18.428 00:15:18.428 Run status group 0 (all jobs): 00:15:18.428 WRITE: bw=159MiB/s (166MB/s), 159MiB/s-159MiB/s (166MB/s-166MB/s), io=794MiB (832MB), run=5001-5001msec 00:15:18.689 ----------------------------------------------------- 00:15:18.689 Suppressions used: 00:15:18.689 count bytes template 00:15:18.689 1 11 /usr/src/fio/parse.c 00:15:18.689 1 8 libtcmalloc_minimal.so 00:15:18.689 1 904 libcrypto.so 00:15:18.689 ----------------------------------------------------- 00:15:18.689 00:15:18.689 ************************************ 00:15:18.689 END TEST xnvme_fio_plugin 00:15:18.689 ************************************ 00:15:18.689 00:15:18.689 real 0m12.258s 00:15:18.689 user 0m6.879s 00:15:18.689 sys 0m4.641s 00:15:18.690 01:00:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.690 01:00:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:18.690 01:00:41 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 84473 00:15:18.690 01:00:41 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 84473 ']' 00:15:18.690 01:00:41 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 84473 00:15:18.690 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (84473) - No such process 00:15:18.690 Process with pid 84473 is not found 00:15:18.690 01:00:41 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 84473 is not found' 00:15:18.690 00:15:18.690 real 3m2.102s 00:15:18.690 user 1m26.974s 00:15:18.690 sys 1m20.466s 00:15:18.690 01:00:41 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.690 ************************************ 00:15:18.690 END TEST nvme_xnvme 00:15:18.690 ************************************ 00:15:18.690 01:00:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.690 01:00:41 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:18.690 01:00:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:18.690 01:00:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.690 01:00:41 -- common/autotest_common.sh@10 -- # set +x 00:15:18.690 ************************************ 00:15:18.690 START TEST blockdev_xnvme 00:15:18.690 ************************************ 00:15:18.690 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:18.690 * Looking for test storage... 00:15:18.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:18.951 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:18.951 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:18.951 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:18.951 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:18.951 01:00:41 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:18.951 01:00:41 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:18.951 01:00:41 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:18.951 01:00:41 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:18.951 01:00:41 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:18.951 01:00:41 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:18.952 01:00:41 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:18.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:18.952 --rc genhtml_branch_coverage=1 00:15:18.952 --rc genhtml_function_coverage=1 00:15:18.952 --rc genhtml_legend=1 00:15:18.952 --rc geninfo_all_blocks=1 00:15:18.952 --rc geninfo_unexecuted_blocks=1 00:15:18.952 00:15:18.952 ' 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85052 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85052 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 85052 ']' 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.952 01:00:41 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.952 01:00:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:18.952 [2024-11-26 01:00:41.788719] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:18.952 [2024-11-26 01:00:41.789144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85052 ] 00:15:19.213 [2024-11-26 01:00:41.927246] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
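The device scan that setup_xnvme_conf performs a few lines below walks every /dev/nvme*n* node and excludes zoned namespaces by reading the queue/zoned sysfs attribute before queueing a bdev_xnvme_create call for each survivor. Condensed from the trace lines that follow, the shell logic is approximately (a reduced sketch, not the verbatim harness code; io_uring is the io_mechanism chosen above):

is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1   # no attribute: treat as not zoned
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

nvmes=()
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue                 # block devices only
    is_block_zoned "${nvme##*/}" && continue   # skip zoned namespaces
    nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} io_uring -c")
done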
00:15:19.213 [2024-11-26 01:00:41.958483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.213 [2024-11-26 01:00:42.000673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.785 01:00:42 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.785 01:00:42 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:19.785 01:00:42 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:19.785 01:00:42 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:19.785 01:00:42 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:19.785 01:00:42 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:19.785 01:00:42 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:20.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:20.932 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:20.932 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:20.932 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:20.932 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme1c1n1/queue/zoned ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:20.932 01:00:43 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:20.932 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:20.933 nvme0n1 00:15:20.933 nvme0n2 00:15:20.933 nvme0n3 00:15:20.933 nvme1n1 00:15:20.933 nvme2n1 00:15:20.933 nvme3n1 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.933 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.933 01:00:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # 
jq -r .name 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5d12d352-4e34-4a54-963d-a3dadb985539"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5d12d352-4e34-4a54-963d-a3dadb985539",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "852797e6-becc-4787-be12-13146f55e335"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "852797e6-becc-4787-be12-13146f55e335",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "055150b0-9e1a-4f36-966b-c732cdeebe6c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "055150b0-9e1a-4f36-966b-c732cdeebe6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "541fb87f-1a70-4625-96cc-d6b5397951ce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "541fb87f-1a70-4625-96cc-d6b5397951ce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ae3118f3-c3ba-4ce0-a71c-316ec4ebbc4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ae3118f3-c3ba-4ce0-a71c-316ec4ebbc4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ce173e11-a8ae-4bf8-8cd4-b83a84a00280"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ce173e11-a8ae-4bf8-8cd4-b83a84a00280",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:21.195 01:00:43 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 85052 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 85052 ']' 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 85052 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85052 00:15:21.195 killing process with pid 85052 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.195 01:00:43 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.196 01:00:43 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85052' 00:15:21.196 01:00:43 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 85052 00:15:21.196 01:00:43 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 85052 00:15:21.768 01:00:44 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:21.768 01:00:44 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:21.768 01:00:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:21.768 01:00:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.768 01:00:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:21.768 ************************************ 00:15:21.768 START TEST bdev_hello_world 00:15:21.768 ************************************ 00:15:21.768 01:00:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:21.768 [2024-11-26 01:00:44.537587] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:21.768 [2024-11-26 01:00:44.537761] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85325 ] 00:15:21.768 [2024-11-26 01:00:44.677803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:22.030 [2024-11-26 01:00:44.704619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.030 [2024-11-26 01:00:44.746371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.292 [2024-11-26 01:00:45.017402] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:22.292 [2024-11-26 01:00:45.017476] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:22.292 [2024-11-26 01:00:45.017501] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:22.292 [2024-11-26 01:00:45.020011] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:22.292 [2024-11-26 01:00:45.021020] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:22.292 [2024-11-26 01:00:45.021075] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:22.292 [2024-11-26 01:00:45.021677] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
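The hello-world pass above can be reproduced standalone with the invocation the harness traced (same JSON config and bdev name as this run):

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1

The example opens nvme0n1, writes "Hello World!", reads it back, and stops the app, which matches the write_complete/read_complete notices in the trace.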
00:15:22.292 00:15:22.292 [2024-11-26 01:00:45.021718] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:22.554 ************************************ 00:15:22.554 00:15:22.554 real 0m0.815s 00:15:22.554 user 0m0.411s 00:15:22.554 sys 0m0.255s 00:15:22.554 01:00:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.554 01:00:45 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 END TEST bdev_hello_world 00:15:22.554 ************************************ 00:15:22.554 01:00:45 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:22.554 01:00:45 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:22.554 01:00:45 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.554 01:00:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 ************************************ 00:15:22.554 START TEST bdev_bounds 00:15:22.554 ************************************ 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=85352 00:15:22.554 Process bdevio pid: 85352 00:15:22.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 85352' 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 85352 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 85352 ']' 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.554 01:00:45 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:22.554 [2024-11-26 01:00:45.413282] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:22.554 [2024-11-26 01:00:45.413431] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85352 ] 00:15:22.816 [2024-11-26 01:00:45.552197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
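For the bounds test, bdevio is started in wait mode and the CUnit suites are then fired over RPC; a sketch of that two-step flow, using the commands this run traced:

    # server side: wait for the trigger (-w), no reserved memory (-s 0), bdevs from the JSON config
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # client side: kick off the suites against the default RPC socket
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests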
00:15:22.816 [2024-11-26 01:00:45.578944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.816 [2024-11-26 01:00:45.624972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:22.816 [2024-11-26 01:00:45.625119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.816 [2024-11-26 01:00:45.628882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.389 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.389 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:23.389 01:00:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:23.651 I/O targets: 00:15:23.651 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:23.651 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:23.651 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:23.651 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:23.651 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:23.651 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:23.651 00:15:23.651 00:15:23.651 CUnit - A unit testing framework for C - Version 2.1-3 00:15:23.651 http://cunit.sourceforge.net/ 00:15:23.651 00:15:23.651 00:15:23.651 Suite: bdevio tests on: nvme3n1 00:15:23.651 Test: blockdev write read block ...passed 00:15:23.651 Test: blockdev write zeroes read block ...passed 00:15:23.651 Test: blockdev write zeroes read no split ...passed 00:15:23.651 Test: blockdev write zeroes read split ...passed 00:15:23.651 Test: blockdev write zeroes read split partial ...passed 00:15:23.651 Test: blockdev reset ...passed 00:15:23.651 Test: blockdev write read 8 blocks ...passed 00:15:23.651 Test: blockdev write read size > 128k ...passed 00:15:23.651 Test: blockdev write read invalid size ...passed 00:15:23.651 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.651 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.651 Test: blockdev write read max offset ...passed 00:15:23.651 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.651 Test: blockdev writev readv 8 blocks ...passed 00:15:23.651 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.651 Test: blockdev writev readv block ...passed 00:15:23.651 Test: blockdev writev readv size > 128k ...passed 00:15:23.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.651 Test: blockdev comparev and writev ...passed 00:15:23.651 Test: blockdev nvme passthru rw ...passed 00:15:23.651 Test: blockdev nvme passthru vendor specific ...passed 00:15:23.651 Test: blockdev nvme admin passthru ...passed 00:15:23.651 Test: blockdev copy ...passed 00:15:23.651 Suite: bdevio tests on: nvme2n1 00:15:23.651 Test: blockdev write read block ...passed 00:15:23.651 Test: blockdev write zeroes read block ...passed 00:15:23.651 Test: blockdev write zeroes read no split ...passed 00:15:23.651 Test: blockdev write zeroes read split ...passed 00:15:23.651 Test: blockdev write zeroes read split partial ...passed 00:15:23.651 Test: blockdev reset ...passed 00:15:23.651 Test: blockdev write read 8 blocks ...passed 00:15:23.651 Test: blockdev write read size > 128k ...passed 00:15:23.651 Test: blockdev write read invalid size ...passed 00:15:23.651 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.651 Test: blockdev write read offset + nbytes > 
size of blockdev ...passed 00:15:23.651 Test: blockdev write read max offset ...passed 00:15:23.651 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.651 Test: blockdev writev readv 8 blocks ...passed 00:15:23.651 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.651 Test: blockdev writev readv block ...passed 00:15:23.651 Test: blockdev writev readv size > 128k ...passed 00:15:23.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.651 Test: blockdev comparev and writev ...passed 00:15:23.651 Test: blockdev nvme passthru rw ...passed 00:15:23.651 Test: blockdev nvme passthru vendor specific ...passed 00:15:23.651 Test: blockdev nvme admin passthru ...passed 00:15:23.651 Test: blockdev copy ...passed 00:15:23.651 Suite: bdevio tests on: nvme1n1 00:15:23.651 Test: blockdev write read block ...passed 00:15:23.651 Test: blockdev write zeroes read block ...passed 00:15:23.651 Test: blockdev write zeroes read no split ...passed 00:15:23.651 Test: blockdev write zeroes read split ...passed 00:15:23.651 Test: blockdev write zeroes read split partial ...passed 00:15:23.651 Test: blockdev reset ...passed 00:15:23.651 Test: blockdev write read 8 blocks ...passed 00:15:23.651 Test: blockdev write read size > 128k ...passed 00:15:23.651 Test: blockdev write read invalid size ...passed 00:15:23.651 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.651 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.651 Test: blockdev write read max offset ...passed 00:15:23.651 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.651 Test: blockdev writev readv 8 blocks ...passed 00:15:23.651 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.651 Test: blockdev writev readv block ...passed 00:15:23.651 Test: blockdev writev readv size > 128k ...passed 00:15:23.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.651 Test: blockdev comparev and writev ...passed 00:15:23.652 Test: blockdev nvme passthru rw ...passed 00:15:23.652 Test: blockdev nvme passthru vendor specific ...passed 00:15:23.652 Test: blockdev nvme admin passthru ...passed 00:15:23.652 Test: blockdev copy ...passed 00:15:23.652 Suite: bdevio tests on: nvme0n3 00:15:23.652 Test: blockdev write read block ...passed 00:15:23.652 Test: blockdev write zeroes read block ...passed 00:15:23.652 Test: blockdev write zeroes read no split ...passed 00:15:23.652 Test: blockdev write zeroes read split ...passed 00:15:23.652 Test: blockdev write zeroes read split partial ...passed 00:15:23.652 Test: blockdev reset ...passed 00:15:23.652 Test: blockdev write read 8 blocks ...passed 00:15:23.652 Test: blockdev write read size > 128k ...passed 00:15:23.652 Test: blockdev write read invalid size ...passed 00:15:23.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.652 Test: blockdev write read max offset ...passed 00:15:23.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.652 Test: blockdev writev readv 8 blocks ...passed 00:15:23.652 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.652 Test: blockdev writev readv block ...passed 00:15:23.652 Test: blockdev writev readv size > 128k ...passed 00:15:23.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.652 Test: blockdev comparev and writev ...passed 
00:15:23.652 Test: blockdev nvme passthru rw ...passed 00:15:23.652 Test: blockdev nvme passthru vendor specific ...passed 00:15:23.652 Test: blockdev nvme admin passthru ...passed 00:15:23.652 Test: blockdev copy ...passed 00:15:23.652 Suite: bdevio tests on: nvme0n2 00:15:23.652 Test: blockdev write read block ...passed 00:15:23.652 Test: blockdev write zeroes read block ...passed 00:15:23.652 Test: blockdev write zeroes read no split ...passed 00:15:23.652 Test: blockdev write zeroes read split ...passed 00:15:23.652 Test: blockdev write zeroes read split partial ...passed 00:15:23.652 Test: blockdev reset ...passed 00:15:23.652 Test: blockdev write read 8 blocks ...passed 00:15:23.652 Test: blockdev write read size > 128k ...passed 00:15:23.652 Test: blockdev write read invalid size ...passed 00:15:23.652 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.652 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.652 Test: blockdev write read max offset ...passed 00:15:23.652 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.652 Test: blockdev writev readv 8 blocks ...passed 00:15:23.652 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.652 Test: blockdev writev readv block ...passed 00:15:23.652 Test: blockdev writev readv size > 128k ...passed 00:15:23.652 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.652 Test: blockdev comparev and writev ...passed 00:15:23.652 Test: blockdev nvme passthru rw ...passed 00:15:23.652 Test: blockdev nvme passthru vendor specific ...passed 00:15:23.652 Test: blockdev nvme admin passthru ...passed 00:15:23.652 Test: blockdev copy ...passed 00:15:23.652 Suite: bdevio tests on: nvme0n1 00:15:23.652 Test: blockdev write read block ...passed 00:15:23.652 Test: blockdev write zeroes read block ...passed 00:15:23.652 Test: blockdev write zeroes read no split ...passed 00:15:23.913 Test: blockdev write zeroes read split ...passed 00:15:23.914 Test: blockdev write zeroes read split partial ...passed 00:15:23.914 Test: blockdev reset ...passed 00:15:23.914 Test: blockdev write read 8 blocks ...passed 00:15:23.914 Test: blockdev write read size > 128k ...passed 00:15:23.914 Test: blockdev write read invalid size ...passed 00:15:23.914 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:23.914 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:23.914 Test: blockdev write read max offset ...passed 00:15:23.914 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:23.914 Test: blockdev writev readv 8 blocks ...passed 00:15:23.914 Test: blockdev writev readv 30 x 1block ...passed 00:15:23.914 Test: blockdev writev readv block ...passed 00:15:23.914 Test: blockdev writev readv size > 128k ...passed 00:15:23.914 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:23.914 Test: blockdev comparev and writev ...passed 00:15:23.914 Test: blockdev nvme passthru rw ...passed 00:15:23.914 Test: blockdev nvme passthru vendor specific ...passed 00:15:23.914 Test: blockdev nvme admin passthru ...passed 00:15:23.914 Test: blockdev copy ...passed 00:15:23.914 00:15:23.914 Run Summary: Type Total Ran Passed Failed Inactive 00:15:23.914 suites 6 6 n/a 0 0 00:15:23.914 tests 138 138 138 0 0 00:15:23.914 asserts 780 780 780 0 n/a 00:15:23.914 00:15:23.914 Elapsed time = 0.512 seconds 00:15:23.914 0 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # 
killprocess 85352 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 85352 ']' 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 85352 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85352 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85352' 00:15:23.914 killing process with pid 85352 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 85352 00:15:23.914 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 85352 00:15:24.175 01:00:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:24.175 00:15:24.175 real 0m1.521s 00:15:24.175 user 0m3.690s 00:15:24.175 sys 0m0.375s 00:15:24.175 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.175 01:00:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 ************************************ 00:15:24.175 END TEST bdev_bounds 00:15:24.175 ************************************ 00:15:24.175 01:00:46 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:24.175 01:00:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:24.175 01:00:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.175 01:00:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 ************************************ 00:15:24.175 START TEST bdev_nbd 00:15:24.175 ************************************ 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:24.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=85400 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 85400 /var/tmp/spdk-nbd.sock 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 85400 ']' 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.175 01:00:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:24.175 [2024-11-26 01:00:46.994374] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:15:24.175 [2024-11-26 01:00:46.994607] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.437 [2024-11-26 01:00:47.126147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
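Each xnvme bdev is then exported as a kernel block device over NBD and verified with a single O_DIRECT block copy; the per-device pattern below is a minimal sketch (the explicit /dev/nbd0 pairing is shown for illustration only — in this run the harness lets the RPC assign the device):

    # export the bdev through the nbd-specific RPC socket (second argument is optional)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        nbd_start_disk nvme0n1 /dev/nbd0
    # verify the device answers a one-block direct read, then tear it down
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0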
00:15:24.437 [2024-11-26 01:00:47.143109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.437 [2024-11-26 01:00:47.165542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.009 01:00:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:25.270 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.271 1+0 records in 00:15:25.271 1+0 records out 00:15:25.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00124674 s, 3.3 MB/s 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.271 01:00:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.271 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.532 1+0 records in 00:15:25.532 1+0 records out 00:15:25.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00148395 s, 2.8 MB/s 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.532 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:25.793 01:00:48 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.793 1+0 records in 00:15:25.793 1+0 records out 00:15:25.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098207 s, 4.2 MB/s 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:25.793 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.054 1+0 records in 00:15:26.054 1+0 records out 00:15:26.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000976372 s, 4.2 MB/s 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:26.054 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.315 1+0 records in 00:15:26.315 1+0 records out 00:15:26.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107313 s, 3.8 MB/s 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:26.315 01:00:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:26.315 01:00:49 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:26.315 1+0 records in 00:15:26.315 1+0 records out 00:15:26.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107273 s, 3.8 MB/s 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:26.315 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd0", 00:15:26.576 "bdev_name": "nvme0n1" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd1", 00:15:26.576 "bdev_name": "nvme0n2" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd2", 00:15:26.576 "bdev_name": "nvme0n3" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd3", 00:15:26.576 "bdev_name": "nvme1n1" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd4", 00:15:26.576 "bdev_name": "nvme2n1" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd5", 00:15:26.576 "bdev_name": "nvme3n1" 00:15:26.576 } 00:15:26.576 ]' 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd0", 00:15:26.576 "bdev_name": "nvme0n1" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd1", 00:15:26.576 "bdev_name": "nvme0n2" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd2", 00:15:26.576 "bdev_name": "nvme0n3" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd3", 00:15:26.576 "bdev_name": "nvme1n1" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd4", 00:15:26.576 "bdev_name": "nvme2n1" 00:15:26.576 }, 00:15:26.576 { 00:15:26.576 "nbd_device": "/dev/nbd5", 00:15:26.576 "bdev_name": "nvme3n1" 00:15:26.576 } 00:15:26.576 ]' 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:26.576 01:00:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.576 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:26.836 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.096 01:00:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:27.368 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.716 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:15:28.018 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.281 01:00:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:28.281 /dev/nbd0 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.281 1+0 records in 00:15:28.281 1+0 records out 00:15:28.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000672367 s, 6.1 MB/s 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.281 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.282 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.282 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.282 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.282 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.282 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:28.543 /dev/nbd1 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.543 1+0 records in 00:15:28.543 1+0 records out 00:15:28.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000827654 s, 4.9 MB/s 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # 
return 0 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.543 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:28.805 /dev/nbd10 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.805 1+0 records in 00:15:28.805 1+0 records out 00:15:28.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000970701 s, 4.2 MB/s 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.805 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:28.806 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:29.067 /dev/nbd11 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.067 01:00:51 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.067 1+0 records in 00:15:29.067 1+0 records out 00:15:29.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064547 s, 6.3 MB/s 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:29.067 01:00:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:29.329 /dev/nbd12 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.329 1+0 records in 00:15:29.329 1+0 records out 00:15:29.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132072 s, 3.1 MB/s 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:29.329 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 
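Note the two nbd_start_disk call shapes in this run: in the first round (nbd_common.sh@28) only the bdev name is passed, the target picks a free device, and the caller captures the chosen path from stdout; in this second round (@15) the device node is pinned explicitly, which also exercises multi-digit names like /dev/nbd10. In isolation:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # 1) let the target allocate a device; the chosen path comes back on stdout
  nbd_device=$("$rpc" -s "$sock" nbd_start_disk nvme2n1)   # e.g. /dev/nbd4
  # 2) pin the bdev to a specific node
  "$rpc" -s "$sock" nbd_start_disk nvme0n1 /dev/nbd0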
00:15:29.590 /dev/nbd13 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.590 1+0 records in 00:15:29.590 1+0 records out 00:15:29.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950752 s, 4.3 MB/s 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:29.590 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd0", 00:15:29.852 "bdev_name": "nvme0n1" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd1", 00:15:29.852 "bdev_name": "nvme0n2" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd10", 00:15:29.852 "bdev_name": "nvme0n3" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd11", 00:15:29.852 "bdev_name": "nvme1n1" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd12", 00:15:29.852 "bdev_name": "nvme2n1" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd13", 00:15:29.852 "bdev_name": "nvme3n1" 00:15:29.852 } 00:15:29.852 ]' 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd0", 00:15:29.852 "bdev_name": "nvme0n1" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd1", 00:15:29.852 "bdev_name": "nvme0n2" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": 
"/dev/nbd10", 00:15:29.852 "bdev_name": "nvme0n3" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd11", 00:15:29.852 "bdev_name": "nvme1n1" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd12", 00:15:29.852 "bdev_name": "nvme2n1" 00:15:29.852 }, 00:15:29.852 { 00:15:29.852 "nbd_device": "/dev/nbd13", 00:15:29.852 "bdev_name": "nvme3n1" 00:15:29.852 } 00:15:29.852 ]' 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:29.852 /dev/nbd1 00:15:29.852 /dev/nbd10 00:15:29.852 /dev/nbd11 00:15:29.852 /dev/nbd12 00:15:29.852 /dev/nbd13' 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:29.852 /dev/nbd1 00:15:29.852 /dev/nbd10 00:15:29.852 /dev/nbd11 00:15:29.852 /dev/nbd12 00:15:29.852 /dev/nbd13' 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:29.852 256+0 records in 00:15:29.852 256+0 records out 00:15:29.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00913502 s, 115 MB/s 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:29.852 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:30.113 256+0 records in 00:15:30.113 256+0 records out 00:15:30.113 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.205816 s, 5.1 MB/s 00:15:30.113 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.113 01:00:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:30.376 256+0 records in 00:15:30.376 256+0 records out 00:15:30.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240211 s, 4.4 MB/s 00:15:30.376 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.376 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 
00:15:30.637 256+0 records in 00:15:30.637 256+0 records out 00:15:30.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243089 s, 4.3 MB/s 00:15:30.637 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.637 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:30.898 256+0 records in 00:15:30.898 256+0 records out 00:15:30.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.216196 s, 4.9 MB/s 00:15:30.898 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.898 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:30.898 256+0 records in 00:15:30.898 256+0 records out 00:15:30.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233028 s, 4.5 MB/s 00:15:30.898 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:30.898 01:00:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:31.160 256+0 records in 00:15:31.160 256+0 records out 00:15:31.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222929 s, 4.7 MB/s 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.160 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- 
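The data-integrity pass running here (nbd_dd_data_verify, nbd_common.sh@70-85) is a single round trip: write one shared 1 MiB random file through every device with O_DIRECT, then read each device back with cmp, so all six exports must return byte-identical contents. Condensed from the trace, with a placeholder temp path:

  # Write the same 1 MiB of random data through each NBD export, then verify.
  tmp=/tmp/nbdrandtest                  # the run uses test/bdev/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 256 x 4 KiB = 1 MiB
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write phase
  done
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"                              # verify phase
  done
  rm "$tmp"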
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.422 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.684 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd10 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.945 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.206 01:00:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.467 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:32.728 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:32.728 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.729 01:00:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:32.729 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:32.990 malloc_lvol_verify 00:15:32.990 01:00:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:33.250 b6f266ab-62f9-4080-b759-e19c43846171 00:15:33.250 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:33.511 0445ea87-dde7-4e50-92b0-3ad0cbd101be 00:15:33.511 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:33.773 /dev/nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:33.773 mke2fs 1.47.0 (5-Feb-2023) 00:15:33.773 Discarding device blocks: 0/4096 done 00:15:33.773 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:33.773 00:15:33.773 Allocating group tables: 0/1 done 00:15:33.773 Writing inode tables: 0/1 done 00:15:33.773 Creating journal (1024 blocks): done 00:15:33.773 Writing superblocks and filesystem accounting 
information: 0/1 done 00:15:33.773 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 85400 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 85400 ']' 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 85400 00:15:33.773 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85400 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.035 killing process with pid 85400 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85400' 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 85400 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 85400 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:34.035 00:15:34.035 real 0m9.951s 00:15:34.035 user 0m13.584s 00:15:34.035 sys 0m3.565s 00:15:34.035 ************************************ 00:15:34.035 END TEST bdev_nbd 00:15:34.035 ************************************ 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.035 01:00:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:34.035 01:00:56 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:34.035 01:00:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:34.035 01:00:56 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:34.035 01:00:56 blockdev_xnvme -- 
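The closing nbd check just traced (nbd_with_lvol_verify, nbd_common.sh@131-142) goes one layer higher: build a malloc bdev, put an lvstore and a logical volume on it, export the lvol over NBD, wait until /sys/block/nbd0/size reports a non-zero capacity, and prove the device usable end to end by formatting it. The RPC sequence from the trace, condensed:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
  "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
  "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in "lvs"
  "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
  # wait_for_nbd_set_capacity: the sysfs size must exist and be non-zero
  [ -e /sys/block/nbd0/size ] && (( $(cat /sys/block/nbd0/size) != 0 ))
  mkfs.ext4 /dev/nbd0            # logged above as 4096 1k blocks / 1024 inodes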
bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:34.035 01:00:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:34.035 01:00:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.035 01:00:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.035 ************************************ 00:15:34.035 START TEST bdev_fio 00:15:34.035 ************************************ 00:15:34.035 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:34.035 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:34.035 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:34.035 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:34.035 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:34.035 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:34.035 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:34.297 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.298 01:00:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:34.298 ************************************ 00:15:34.298 START TEST bdev_fio_rw_verify 00:15:34.298 ************************************ 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.298 01:00:57 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:34.298 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.298 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.298 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.298 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.298 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.298 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:34.298 fio-3.35 00:15:34.298 Starting 6 threads 00:15:46.530 00:15:46.530 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=85796: Tue Nov 26 01:01:07 2024 00:15:46.530 read: IOPS=14.7k, BW=57.4MiB/s (60.2MB/s)(574MiB/10002msec) 00:15:46.530 slat (usec): min=2, max=2091, avg= 6.50, stdev=14.33 00:15:46.530 clat (usec): min=85, max=9824, avg=1346.88, stdev=726.94 00:15:46.530 lat (usec): min=92, max=9829, avg=1353.38, stdev=727.46 00:15:46.530 clat percentiles (usec): 00:15:46.530 | 50.000th=[ 1254], 99.000th=[ 3523], 99.900th=[ 5145], 99.990th=[ 7111], 00:15:46.530 | 99.999th=[ 9765] 00:15:46.530 write: IOPS=15.1k, BW=58.8MiB/s (61.7MB/s)(589MiB/10002msec); 0 zone resets 00:15:46.530 slat (usec): min=12, max=4053, avg=39.29, 
stdev=132.54 00:15:46.530 clat (usec): min=93, max=6507, avg=1554.97, stdev=754.40 00:15:46.530 lat (usec): min=107, max=6551, avg=1594.26, stdev=765.88 00:15:46.530 clat percentiles (usec): 00:15:46.530 | 50.000th=[ 1450], 99.000th=[ 3818], 99.900th=[ 5080], 99.990th=[ 6194], 00:15:46.530 | 99.999th=[ 6456] 00:15:46.530 bw ( KiB/s): min=49036, max=79929, per=100.00%, avg=60625.16, stdev=1635.22, samples=114 00:15:46.530 iops : min=12256, max=19981, avg=15155.53, stdev=408.81, samples=114 00:15:46.530 lat (usec) : 100=0.01%, 250=1.64%, 500=5.55%, 750=9.02%, 1000=12.51% 00:15:46.530 lat (msec) : 2=51.52%, 4=19.17%, 10=0.58% 00:15:46.530 cpu : usr=44.69%, sys=31.13%, ctx=5027, majf=0, minf=14960 00:15:46.530 IO depths : 1=11.5%, 2=24.0%, 4=51.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:46.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.530 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:46.530 issued rwts: total=146958,150667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:46.530 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:46.530 00:15:46.530 Run status group 0 (all jobs): 00:15:46.530 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=574MiB (602MB), run=10002-10002msec 00:15:46.530 WRITE: bw=58.8MiB/s (61.7MB/s), 58.8MiB/s-58.8MiB/s (61.7MB/s-61.7MB/s), io=589MiB (617MB), run=10002-10002msec 00:15:46.531 ----------------------------------------------------- 00:15:46.531 Suppressions used: 00:15:46.531 count bytes template 00:15:46.531 6 48 /usr/src/fio/parse.c 00:15:46.531 3614 346944 /usr/src/fio/iolog.c 00:15:46.531 1 8 libtcmalloc_minimal.so 00:15:46.531 1 904 libcrypto.so 00:15:46.531 ----------------------------------------------------- 00:15:46.531 00:15:46.531 00:15:46.531 real 0m11.204s 00:15:46.531 user 0m27.566s 00:15:46.531 sys 0m19.014s 00:15:46.531 ************************************ 00:15:46.531 END TEST bdev_fio_rw_verify 00:15:46.531 ************************************ 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:46.531 01:01:08 
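The fio pass summarized above does not touch kernel block devices at all: bdev.fio carries one [job_nvmeXnY] section per bdev (blockdev.sh@340-342) and the spdk_bdev ioengine from build/fio/spdk_bdev submits I/O to the bdev layer directly. Because the plugin is ASan-instrumented, the harness first resolves the runtime it links against (ldd + grep + awk, autotest_common.sh@1349) and preloads both. The invocation shape, condensed from the trace:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output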
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "5d12d352-4e34-4a54-963d-a3dadb985539"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5d12d352-4e34-4a54-963d-a3dadb985539",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "852797e6-becc-4787-be12-13146f55e335"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "852797e6-becc-4787-be12-13146f55e335",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "055150b0-9e1a-4f36-966b-c732cdeebe6c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "055150b0-9e1a-4f36-966b-c732cdeebe6c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "541fb87f-1a70-4625-96cc-d6b5397951ce"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "541fb87f-1a70-4625-96cc-d6b5397951ce",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "ae3118f3-c3ba-4ce0-a71c-316ec4ebbc4a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ae3118f3-c3ba-4ce0-a71c-316ec4ebbc4a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ce173e11-a8ae-4bf8-8cd4-b83a84a00280"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "ce173e11-a8ae-4bf8-8cd4-b83a84a00280",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:46.531 /home/vagrant/spdk_repo/spdk 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:15:46.531 00:15:46.531 real 0m11.385s 00:15:46.531 user 0m27.639s 00:15:46.531 sys 0m19.095s 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.531 01:01:08 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:46.531 ************************************ 00:15:46.531 END TEST bdev_fio 00:15:46.531 
************************************
00:15:46.531 01:01:08 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:15:46.531 01:01:08 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:15:46.531 01:01:08 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:15:46.531 01:01:08 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:46.531 01:01:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:46.531 ************************************
00:15:46.531 START TEST bdev_verify
00:15:46.531 ************************************
00:15:46.531 01:01:08 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:15:46.531 [2024-11-26 01:01:08.472356] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
00:15:46.531 [2024-11-26 01:01:08.472499] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85960 ]
00:15:46.531 [2024-11-26 01:01:08.610995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:46.531 [2024-11-26 01:01:08.639715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:46.531 [2024-11-26 01:01:08.672030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:46.531 [2024-11-26 01:01:08.672121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:46.531 Running I/O for 5 seconds...
00:15:48.420 23680.00 IOPS, 92.50 MiB/s
[2024-11-26T01:01:12.282Z] 24128.00 IOPS, 94.25 MiB/s
[2024-11-26T01:01:13.227Z] 24661.33 IOPS, 96.33 MiB/s
[2024-11-26T01:01:14.172Z] 23896.00 IOPS, 93.34 MiB/s
[2024-11-26T01:01:14.172Z] 23788.80 IOPS, 92.92 MiB/s
00:15:51.255 Latency(us)
00:15:51.255 [2024-11-26T01:01:14.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:51.255 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x0 length 0x80000
00:15:51.255 nvme0n1 : 5.03 1935.12 7.56 0.00 0.00 66022.97 9477.51 68560.74
00:15:51.255 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x80000 length 0x80000
00:15:51.255 nvme0n1 : 5.06 1844.85 7.21 0.00 0.00 69239.27 8620.50 71787.13
00:15:51.255 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x0 length 0x80000
00:15:51.255 nvme0n2 : 5.08 1941.54 7.58 0.00 0.00 65671.71 9326.28 65334.35
00:15:51.255 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x80000 length 0x80000
00:15:51.255 nvme0n2 : 5.06 1822.83 7.12 0.00 0.00 69944.36 13308.85 63721.16
00:15:51.255 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x0 length 0x80000
00:15:51.255 nvme0n3 : 5.08 1939.59 7.58 0.00 0.00 65624.05 4486.70 66544.25
00:15:51.255 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x80000 length 0x80000
00:15:51.255 nvme0n3 : 5.06 1822.26 7.12 0.00 0.00 69837.81 13006.38 64931.05
00:15:51.255 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x0 length 0x20000
00:15:51.255 nvme1n1 : 5.08 1940.88 7.58 0.00 0.00 65452.06 8570.09 63317.86
00:15:51.255 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x20000 length 0x20000
00:15:51.255 nvme1n1 : 5.08 1838.16 7.18 0.00 0.00 69097.05 8469.27 68560.74
00:15:51.255 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x0 length 0xbd0bd
00:15:51.255 nvme2n1 : 5.07 2558.16 9.99 0.00 0.00 49549.28 5041.23 59688.17
00:15:51.255 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:15:51.255 nvme2n1 : 5.08 2517.50 9.83 0.00 0.00 50191.64 6125.10 68964.04
00:15:51.255 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0x0 length 0xa0000
00:15:51.255 nvme3n1 : 5.09 1835.61 7.17 0.00 0.00 68980.37 1676.21 75820.11
00:15:51.255 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:15:51.255 Verification LBA range: start 0xa0000 length 0xa0000
00:15:51.255 nvme3n1 : 5.09 1533.48 5.99 0.00 0.00 82384.73 3251.59 96791.63
00:15:51.255 [2024-11-26T01:01:14.172Z] ===================================================================================================================
00:15:51.255 [2024-11-26T01:01:14.172Z] Total : 23529.97 91.91 0.00 0.00 64818.73 1676.21 96791.63
00:15:51.517
00:15:51.517 real 0m5.910s
00:15:51.517 user 0m9.248s
00:15:51.517 sys 0m1.662s
00:15:51.517 01:01:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:51.517 01:01:14 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:15:51.517 ************************************
00:15:51.517 END TEST bdev_verify
00:15:51.517 ************************************
00:15:51.517 01:01:14 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:15:51.517 01:01:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:15:51.517 01:01:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:51.517 01:01:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:51.517 ************************************
00:15:51.517 START TEST bdev_verify_big_io
00:15:51.517 ************************************
00:15:51.517 01:01:14 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:15:51.780 [2024-11-26 01:01:14.450273] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
00:15:51.780 [2024-11-26 01:01:14.450424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86052 ]
00:15:51.780 [2024-11-26 01:01:14.588895] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:51.780 [2024-11-26 01:01:14.617987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:51.780 [2024-11-26 01:01:14.659349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:51.780 [2024-11-26 01:01:14.659437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:52.353 Running I/O for 5 seconds...
00:15:58.191 1140.00 IOPS, 71.25 MiB/s
[2024-11-26T01:01:21.369Z] 2271.00 IOPS, 141.94 MiB/s
[2024-11-26T01:01:21.630Z] 2516.67 IOPS, 157.29 MiB/s
00:15:58.713 Latency(us)
00:15:58.714 [2024-11-26T01:01:21.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:58.714 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x0 length 0x8000
00:15:58.714 nvme0n1 : 5.88 106.10 6.63 0.00 0.00 1180687.27 66544.25 2077793.67
00:15:58.714 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x8000 length 0x8000
00:15:58.714 nvme0n1 : 6.13 83.56 5.22 0.00 0.00 1465332.97 84289.38 1729343.80
00:15:58.714 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x0 length 0x8000
00:15:58.714 nvme0n2 : 5.90 108.48 6.78 0.00 0.00 1118803.99 5671.38 987274.63
00:15:58.714 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x8000 length 0x8000
00:15:58.714 nvme0n2 : 6.01 85.21 5.33 0.00 0.00 1363244.90 322638.77 1406705.03
00:15:58.714 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x0 length 0x8000
00:15:58.714 nvme0n3 : 5.89 119.56 7.47 0.00 0.00 991558.39 16232.76 1509949.44
00:15:58.714 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x8000 length 0x8000
00:15:58.714 nvme0n3 : 6.04 95.40 5.96 0.00 0.00 1155504.01 69770.63 2568204.60
00:15:58.714 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x0 length 0x2000
00:15:58.714 nvme1n1 : 5.88 102.31 6.39 0.00 0.00 1123883.70 10889.06 2645637.91
00:15:58.714 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x2000 length 0x2000
00:15:58.714 nvme1n1 : 6.09 105.02 6.56 0.00 0.00 992587.20 5142.06 1077613.49
00:15:58.714 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x0 length 0xbd0b
00:15:58.714 nvme2n1 : 5.88 166.71 10.42 0.00 0.00 665305.83 58881.58 1103424.59
00:15:58.714 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0xbd0b length 0xbd0b
00:15:58.714 nvme2n1 : 6.27 168.31 10.52 0.00 0.00 601250.90 5898.24 2168132.53
00:15:58.714 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0x0 length 0xa000
00:15:58.714 nvme3n1 : 5.89 130.35 8.15 0.00 0.00 825954.92 4461.49 935652.43
00:15:58.714 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:15:58.714 Verification LBA range: start 0xa000 length 0xa000
00:15:58.714 nvme3n1 : 6.47 222.51 13.91 0.00 0.00 433277.98 614.40 2645637.91
00:15:58.714 [2024-11-26T01:01:21.631Z] ===================================================================================================================
00:15:58.714 [2024-11-26T01:01:21.631Z] Total : 1493.52 93.34 0.00 0.00 897023.51 614.40 2645637.91
00:15:58.976
00:15:58.976 real 0m7.388s
00:15:58.976 user 0m13.589s
00:15:58.976 sys 0m0.474s
00:15:58.976 ************************************
00:15:58.976 END TEST bdev_verify_big_io
00:15:58.976 ************************************
00:15:58.976 01:01:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:58.977 01:01:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:15:58.977 01:01:21 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:58.977 01:01:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:15:58.977 01:01:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:58.977 01:01:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:58.977 ************************************
00:15:58.977 START TEST bdev_write_zeroes
00:15:58.977 ************************************
00:15:58.977 01:01:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:15:59.239 [2024-11-26 01:01:21.894122] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
00:15:59.239 [2024-11-26 01:01:21.894259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86152 ]
00:15:59.239 [2024-11-26 01:01:22.030908] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:59.239 [2024-11-26 01:01:22.056945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:59.239 [2024-11-26 01:01:22.090670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:59.501 Running I/O for 1 seconds...
00:16:00.444 78016.00 IOPS, 304.75 MiB/s
00:16:00.445 Latency(us)
00:16:00.445 [2024-11-26T01:01:23.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:00.445 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:00.445 nvme0n1 : 1.02 12400.93 48.44 0.00 0.00 10312.24 6351.95 21878.94
00:16:00.445 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:00.445 nvme0n2 : 1.02 12386.88 48.39 0.00 0.00 10318.25 6402.36 22181.42
00:16:00.445 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:00.445 nvme0n3 : 1.02 12372.95 48.33 0.00 0.00 10321.47 6377.16 22584.71
00:16:00.445 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:00.445 nvme1n1 : 1.02 12484.43 48.77 0.00 0.00 10222.72 6351.95 19761.62
00:16:00.445 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:00.445 nvme2n1 : 1.03 15042.55 58.76 0.00 0.00 8477.76 1953.48 20164.92
00:16:00.445 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:00.445 nvme3n1 : 1.03 12344.80 48.22 0.00 0.00 10309.93 5797.42 22282.24
00:16:00.445 [2024-11-26T01:01:23.362Z] ===================================================================================================================
00:16:00.445 [2024-11-26T01:01:23.362Z] Total : 77032.54 300.91 0.00 0.00 9940.77 1953.48 22584.71
00:16:00.706
00:16:00.706 real 0m1.718s
00:16:00.706 user 0m1.045s
00:16:00.706 sys 0m0.498s
00:16:00.706 01:01:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:00.706 01:01:23 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:16:00.706 ************************************
00:16:00.706 END TEST bdev_write_zeroes
00:16:00.707 ************************************
00:16:00.707 01:01:23 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:00.707 01:01:23 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:00.707 01:01:23 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:00.707 01:01:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:00.707 ************************************
00:16:00.707 START TEST bdev_json_nonenclosed
00:16:00.707 ************************************
00:16:00.707 01:01:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:00.968 [2024-11-26 01:01:23.672173] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
00:16:00.968 [2024-11-26 01:01:23.672686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86194 ]
00:16:00.968 [2024-11-26 01:01:23.809278] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:16:00.968 [2024-11-26 01:01:23.837795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:00.968 [2024-11-26 01:01:23.876522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:00.968 [2024-11-26 01:01:23.876646] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:16:00.968 [2024-11-26 01:01:23.876672] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:16:00.968 [2024-11-26 01:01:23.876689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:01.230
00:16:01.230 real 0m0.366s
00:16:01.230 user 0m0.146s
00:16:01.230 sys 0m0.115s
00:16:01.230 01:01:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:01.230 ************************************
00:16:01.230 01:01:23 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:16:01.230 END TEST bdev_json_nonenclosed
00:16:01.230 ************************************
00:16:01.230 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:01.230 01:01:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:01.230 01:01:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:01.230 01:01:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:01.230 ************************************
00:16:01.230 START TEST bdev_json_nonarray
00:16:01.230 ************************************
00:16:01.230 01:01:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:01.230 [2024-11-26 01:01:24.110761] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
00:16:01.230 [2024-11-26 01:01:24.110900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86214 ]
00:16:01.491 [2024-11-26 01:01:24.247545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:16:01.491 [2024-11-26 01:01:24.274590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:01.491 [2024-11-26 01:01:24.312554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:01.491 [2024-11-26 01:01:24.312685] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:16:01.491 [2024-11-26 01:01:24.312707] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:16:01.491 [2024-11-26 01:01:24.312721] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:01.752
00:16:01.752 real 0m0.369s
00:16:01.752 user 0m0.145s
00:16:01.752 sys 0m0.120s
00:16:01.752 01:01:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:01.752 ************************************
00:16:01.752 END TEST bdev_json_nonarray
00:16:01.752 ************************************
00:16:01.752 01:01:24 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]]
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]]
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]]
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:16:01.752 01:01:24 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:16:02.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:08.910 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:16:10.821 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:16:10.821 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:16:10.821 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:16:10.821
00:16:10.821 real 0m51.877s
00:16:10.821 user 1m13.264s
00:16:10.821 sys 0m48.276s
00:16:10.821 01:01:33 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:10.821 01:01:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:10.821 ************************************
00:16:10.821 END TEST blockdev_xnvme
00:16:10.821 ************************************
00:16:10.821 01:01:33 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:16:10.821 01:01:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:10.821 01:01:33 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:10.821 01:01:33 -- common/autotest_common.sh@10 -- # set +x
00:16:10.821 ************************************
00:16:10.821 START TEST ublk
00:16:10.821 ************************************
00:16:10.821 01:01:33 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:16:10.821 * Looking for test storage...
00:16:10.821 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:10.821 01:01:33 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:10.821 01:01:33 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:10.821 01:01:33 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:10.821 01:01:33 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:10.821 01:01:33 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:10.821 01:01:33 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:10.821 01:01:33 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:10.821 01:01:33 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:10.821 01:01:33 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:10.821 01:01:33 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:10.821 01:01:33 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:10.821 01:01:33 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:10.821 01:01:33 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:10.821 01:01:33 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:10.821 01:01:33 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:10.822 01:01:33 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:10.822 01:01:33 ublk -- scripts/common.sh@345 -- # : 1 00:16:10.822 01:01:33 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:10.822 01:01:33 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:10.822 01:01:33 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:10.822 01:01:33 ublk -- scripts/common.sh@353 -- # local d=1 00:16:10.822 01:01:33 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:10.822 01:01:33 ublk -- scripts/common.sh@355 -- # echo 1 00:16:10.822 01:01:33 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:10.822 01:01:33 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:10.822 01:01:33 ublk -- scripts/common.sh@353 -- # local d=2 00:16:10.822 01:01:33 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:10.822 01:01:33 ublk -- scripts/common.sh@355 -- # echo 2 00:16:10.822 01:01:33 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:10.822 01:01:33 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:10.822 01:01:33 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:10.822 01:01:33 ublk -- scripts/common.sh@368 -- # return 0 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:10.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.822 --rc genhtml_branch_coverage=1 00:16:10.822 --rc genhtml_function_coverage=1 00:16:10.822 --rc genhtml_legend=1 00:16:10.822 --rc geninfo_all_blocks=1 00:16:10.822 --rc geninfo_unexecuted_blocks=1 00:16:10.822 00:16:10.822 ' 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:10.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.822 --rc genhtml_branch_coverage=1 00:16:10.822 --rc genhtml_function_coverage=1 00:16:10.822 --rc genhtml_legend=1 00:16:10.822 --rc geninfo_all_blocks=1 00:16:10.822 --rc geninfo_unexecuted_blocks=1 00:16:10.822 00:16:10.822 ' 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:10.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.822 --rc genhtml_branch_coverage=1 00:16:10.822 --rc 
genhtml_function_coverage=1 00:16:10.822 --rc genhtml_legend=1 00:16:10.822 --rc geninfo_all_blocks=1 00:16:10.822 --rc geninfo_unexecuted_blocks=1 00:16:10.822 00:16:10.822 ' 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:10.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:10.822 --rc genhtml_branch_coverage=1 00:16:10.822 --rc genhtml_function_coverage=1 00:16:10.822 --rc genhtml_legend=1 00:16:10.822 --rc geninfo_all_blocks=1 00:16:10.822 --rc geninfo_unexecuted_blocks=1 00:16:10.822 00:16:10.822 ' 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:10.822 01:01:33 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:10.822 01:01:33 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:10.822 01:01:33 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:10.822 01:01:33 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:10.822 01:01:33 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:10.822 01:01:33 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:10.822 01:01:33 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:10.822 01:01:33 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:10.822 01:01:33 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.822 01:01:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:10.822 ************************************ 00:16:10.822 START TEST test_save_ublk_config 00:16:10.822 ************************************ 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86518 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86518 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86518 ']' 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:10.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.822 01:01:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:10.822 [2024-11-26 01:01:33.704577] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:10.822 [2024-11-26 01:01:33.704691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86518 ] 00:16:11.083 [2024-11-26 01:01:33.839425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:11.083 [2024-11-26 01:01:33.869907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.083 [2024-11-26 01:01:33.909880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.656 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.656 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:11.656 01:01:34 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:11.656 01:01:34 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:11.656 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.656 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:11.656 [2024-11-26 01:01:34.553877] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:11.656 [2024-11-26 01:01:34.555070] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:11.917 malloc0 00:16:11.917 [2024-11-26 01:01:34.593131] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:11.917 [2024-11-26 01:01:34.593228] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:11.917 [2024-11-26 01:01:34.593244] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:11.917 [2024-11-26 01:01:34.593253] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:11.917 [2024-11-26 01:01:34.601999] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:11.917 [2024-11-26 01:01:34.602790] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:11.917 [2024-11-26 01:01:34.608901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:11.917 [2024-11-26 01:01:34.609034] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:11.917 [2024-11-26 01:01:34.625881] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:11.917 0 00:16:11.917 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.917 01:01:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:11.917 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.917 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:12.179 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:12.179 01:01:34 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:12.179 "subsystems": [ 00:16:12.179 { 00:16:12.179 "subsystem": "fsdev", 00:16:12.179 "config": [ 00:16:12.179 { 00:16:12.179 "method": "fsdev_set_opts", 00:16:12.179 "params": { 00:16:12.179 "fsdev_io_pool_size": 65535, 00:16:12.179 "fsdev_io_cache_size": 256 00:16:12.179 } 00:16:12.179 } 00:16:12.179 ] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "keyring", 00:16:12.179 "config": [] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "iobuf", 00:16:12.179 "config": [ 00:16:12.179 { 00:16:12.179 "method": "iobuf_set_options", 00:16:12.179 "params": { 00:16:12.179 "small_pool_count": 8192, 00:16:12.179 "large_pool_count": 1024, 00:16:12.179 "small_bufsize": 8192, 00:16:12.179 "large_bufsize": 135168, 00:16:12.179 "enable_numa": false 00:16:12.179 } 00:16:12.179 } 00:16:12.179 ] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "sock", 00:16:12.179 "config": [ 00:16:12.179 { 00:16:12.179 "method": "sock_set_default_impl", 00:16:12.179 "params": { 00:16:12.179 "impl_name": "posix" 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "method": "sock_impl_set_options", 00:16:12.179 "params": { 00:16:12.179 "impl_name": "ssl", 00:16:12.179 "recv_buf_size": 4096, 00:16:12.179 "send_buf_size": 4096, 00:16:12.179 "enable_recv_pipe": true, 00:16:12.179 "enable_quickack": false, 00:16:12.179 "enable_placement_id": 0, 00:16:12.179 "enable_zerocopy_send_server": true, 00:16:12.179 "enable_zerocopy_send_client": false, 00:16:12.179 "zerocopy_threshold": 0, 00:16:12.179 "tls_version": 0, 00:16:12.179 "enable_ktls": false 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "method": "sock_impl_set_options", 00:16:12.179 "params": { 00:16:12.179 "impl_name": "posix", 00:16:12.179 "recv_buf_size": 2097152, 00:16:12.179 "send_buf_size": 2097152, 00:16:12.179 "enable_recv_pipe": true, 00:16:12.179 "enable_quickack": false, 00:16:12.179 "enable_placement_id": 0, 00:16:12.179 "enable_zerocopy_send_server": true, 00:16:12.179 "enable_zerocopy_send_client": false, 00:16:12.179 "zerocopy_threshold": 0, 00:16:12.179 "tls_version": 0, 00:16:12.179 "enable_ktls": false 00:16:12.179 } 00:16:12.179 } 00:16:12.179 ] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "vmd", 00:16:12.179 "config": [] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "accel", 00:16:12.179 "config": [ 00:16:12.179 { 00:16:12.179 "method": "accel_set_options", 00:16:12.179 "params": { 00:16:12.179 "small_cache_size": 128, 00:16:12.179 "large_cache_size": 16, 00:16:12.179 "task_count": 2048, 00:16:12.179 "sequence_count": 2048, 00:16:12.179 "buf_count": 2048 00:16:12.179 } 00:16:12.179 } 00:16:12.179 ] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "bdev", 00:16:12.179 "config": [ 00:16:12.179 { 00:16:12.179 "method": "bdev_set_options", 00:16:12.179 "params": { 00:16:12.179 "bdev_io_pool_size": 65535, 00:16:12.179 "bdev_io_cache_size": 256, 00:16:12.179 "bdev_auto_examine": true, 00:16:12.179 "iobuf_small_cache_size": 128, 00:16:12.179 "iobuf_large_cache_size": 16 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "method": "bdev_raid_set_options", 00:16:12.179 "params": { 00:16:12.179 "process_window_size_kb": 1024, 00:16:12.179 "process_max_bandwidth_mb_sec": 0 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "method": "bdev_iscsi_set_options", 00:16:12.179 "params": { 00:16:12.179 "timeout_sec": 30 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 
"method": "bdev_nvme_set_options", 00:16:12.179 "params": { 00:16:12.179 "action_on_timeout": "none", 00:16:12.179 "timeout_us": 0, 00:16:12.179 "timeout_admin_us": 0, 00:16:12.179 "keep_alive_timeout_ms": 10000, 00:16:12.179 "arbitration_burst": 0, 00:16:12.179 "low_priority_weight": 0, 00:16:12.179 "medium_priority_weight": 0, 00:16:12.179 "high_priority_weight": 0, 00:16:12.179 "nvme_adminq_poll_period_us": 10000, 00:16:12.179 "nvme_ioq_poll_period_us": 0, 00:16:12.179 "io_queue_requests": 0, 00:16:12.179 "delay_cmd_submit": true, 00:16:12.179 "transport_retry_count": 4, 00:16:12.179 "bdev_retry_count": 3, 00:16:12.179 "transport_ack_timeout": 0, 00:16:12.179 "ctrlr_loss_timeout_sec": 0, 00:16:12.179 "reconnect_delay_sec": 0, 00:16:12.179 "fast_io_fail_timeout_sec": 0, 00:16:12.179 "disable_auto_failback": false, 00:16:12.179 "generate_uuids": false, 00:16:12.179 "transport_tos": 0, 00:16:12.179 "nvme_error_stat": false, 00:16:12.179 "rdma_srq_size": 0, 00:16:12.179 "io_path_stat": false, 00:16:12.179 "allow_accel_sequence": false, 00:16:12.179 "rdma_max_cq_size": 0, 00:16:12.179 "rdma_cm_event_timeout_ms": 0, 00:16:12.179 "dhchap_digests": [ 00:16:12.179 "sha256", 00:16:12.179 "sha384", 00:16:12.179 "sha512" 00:16:12.179 ], 00:16:12.179 "dhchap_dhgroups": [ 00:16:12.179 "null", 00:16:12.179 "ffdhe2048", 00:16:12.179 "ffdhe3072", 00:16:12.179 "ffdhe4096", 00:16:12.179 "ffdhe6144", 00:16:12.179 "ffdhe8192" 00:16:12.179 ] 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "method": "bdev_nvme_set_hotplug", 00:16:12.179 "params": { 00:16:12.179 "period_us": 100000, 00:16:12.179 "enable": false 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "method": "bdev_malloc_create", 00:16:12.179 "params": { 00:16:12.179 "name": "malloc0", 00:16:12.179 "num_blocks": 8192, 00:16:12.179 "block_size": 4096, 00:16:12.179 "physical_block_size": 4096, 00:16:12.179 "uuid": "bec6408d-d91b-47f4-8810-23d9c96852ac", 00:16:12.179 "optimal_io_boundary": 0, 00:16:12.179 "md_size": 0, 00:16:12.179 "dif_type": 0, 00:16:12.179 "dif_is_head_of_md": false, 00:16:12.179 "dif_pi_format": 0 00:16:12.179 } 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "method": "bdev_wait_for_examine" 00:16:12.179 } 00:16:12.179 ] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "scsi", 00:16:12.179 "config": null 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "scheduler", 00:16:12.179 "config": [ 00:16:12.179 { 00:16:12.179 "method": "framework_set_scheduler", 00:16:12.179 "params": { 00:16:12.179 "name": "static" 00:16:12.179 } 00:16:12.179 } 00:16:12.179 ] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "vhost_scsi", 00:16:12.179 "config": [] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "vhost_blk", 00:16:12.179 "config": [] 00:16:12.179 }, 00:16:12.179 { 00:16:12.179 "subsystem": "ublk", 00:16:12.180 "config": [ 00:16:12.180 { 00:16:12.180 "method": "ublk_create_target", 00:16:12.180 "params": { 00:16:12.180 "cpumask": "1" 00:16:12.180 } 00:16:12.180 }, 00:16:12.180 { 00:16:12.180 "method": "ublk_start_disk", 00:16:12.180 "params": { 00:16:12.180 "bdev_name": "malloc0", 00:16:12.180 "ublk_id": 0, 00:16:12.180 "num_queues": 1, 00:16:12.180 "queue_depth": 128 00:16:12.180 } 00:16:12.180 } 00:16:12.180 ] 00:16:12.180 }, 00:16:12.180 { 00:16:12.180 "subsystem": "nbd", 00:16:12.180 "config": [] 00:16:12.180 }, 00:16:12.180 { 00:16:12.180 "subsystem": "nvmf", 00:16:12.180 "config": [ 00:16:12.180 { 00:16:12.180 "method": "nvmf_set_config", 00:16:12.180 "params": { 00:16:12.180 
"discovery_filter": "match_any", 00:16:12.180 "admin_cmd_passthru": { 00:16:12.180 "identify_ctrlr": false 00:16:12.180 }, 00:16:12.180 "dhchap_digests": [ 00:16:12.180 "sha256", 00:16:12.180 "sha384", 00:16:12.180 "sha512" 00:16:12.180 ], 00:16:12.180 "dhchap_dhgroups": [ 00:16:12.180 "null", 00:16:12.180 "ffdhe2048", 00:16:12.180 "ffdhe3072", 00:16:12.180 "ffdhe4096", 00:16:12.180 "ffdhe6144", 00:16:12.180 "ffdhe8192" 00:16:12.180 ] 00:16:12.180 } 00:16:12.180 }, 00:16:12.180 { 00:16:12.180 "method": "nvmf_set_max_subsystems", 00:16:12.180 "params": { 00:16:12.180 "max_subsystems": 1024 00:16:12.180 } 00:16:12.180 }, 00:16:12.180 { 00:16:12.180 "method": "nvmf_set_crdt", 00:16:12.180 "params": { 00:16:12.180 "crdt1": 0, 00:16:12.180 "crdt2": 0, 00:16:12.180 "crdt3": 0 00:16:12.180 } 00:16:12.180 } 00:16:12.180 ] 00:16:12.180 }, 00:16:12.180 { 00:16:12.180 "subsystem": "iscsi", 00:16:12.180 "config": [ 00:16:12.180 { 00:16:12.180 "method": "iscsi_set_options", 00:16:12.180 "params": { 00:16:12.180 "node_base": "iqn.2016-06.io.spdk", 00:16:12.180 "max_sessions": 128, 00:16:12.180 "max_connections_per_session": 2, 00:16:12.180 "max_queue_depth": 64, 00:16:12.180 "default_time2wait": 2, 00:16:12.180 "default_time2retain": 20, 00:16:12.180 "first_burst_length": 8192, 00:16:12.180 "immediate_data": true, 00:16:12.180 "allow_duplicated_isid": false, 00:16:12.180 "error_recovery_level": 0, 00:16:12.180 "nop_timeout": 60, 00:16:12.180 "nop_in_interval": 30, 00:16:12.180 "disable_chap": false, 00:16:12.180 "require_chap": false, 00:16:12.180 "mutual_chap": false, 00:16:12.180 "chap_group": 0, 00:16:12.180 "max_large_datain_per_connection": 64, 00:16:12.180 "max_r2t_per_connection": 4, 00:16:12.180 "pdu_pool_size": 36864, 00:16:12.180 "immediate_data_pool_size": 16384, 00:16:12.180 "data_out_pool_size": 2048 00:16:12.180 } 00:16:12.180 } 00:16:12.180 ] 00:16:12.180 } 00:16:12.180 ] 00:16:12.180 }' 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86518 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86518 ']' 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86518 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86518 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:12.180 killing process with pid 86518 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86518' 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86518 00:16:12.180 01:01:34 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86518 00:16:12.441 [2024-11-26 01:01:35.355778] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:12.703 [2024-11-26 01:01:35.392921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:12.703 [2024-11-26 01:01:35.393082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:12.703 [2024-11-26 01:01:35.400892] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:12.703 [2024-11-26 01:01:35.400963] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:12.703 [2024-11-26 01:01:35.400984] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:12.703 [2024-11-26 01:01:35.401019] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:12.703 [2024-11-26 01:01:35.401190] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86556 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86556 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 86556 ']' 00:16:13.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:13.277 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:13.277 "subsystems": [ 00:16:13.277 { 00:16:13.277 "subsystem": "fsdev", 00:16:13.277 "config": [ 00:16:13.277 { 00:16:13.277 "method": "fsdev_set_opts", 00:16:13.277 "params": { 00:16:13.277 "fsdev_io_pool_size": 65535, 00:16:13.277 "fsdev_io_cache_size": 256 00:16:13.277 } 00:16:13.277 } 00:16:13.277 ] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "keyring", 00:16:13.277 "config": [] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "iobuf", 00:16:13.277 "config": [ 00:16:13.277 { 00:16:13.277 "method": "iobuf_set_options", 00:16:13.277 "params": { 00:16:13.277 "small_pool_count": 8192, 00:16:13.277 "large_pool_count": 1024, 00:16:13.277 "small_bufsize": 8192, 00:16:13.277 "large_bufsize": 135168, 00:16:13.277 "enable_numa": false 00:16:13.277 } 00:16:13.277 } 00:16:13.277 ] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "sock", 00:16:13.277 "config": [ 00:16:13.277 { 00:16:13.277 "method": "sock_set_default_impl", 00:16:13.277 "params": { 00:16:13.277 "impl_name": "posix" 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "sock_impl_set_options", 00:16:13.277 "params": { 00:16:13.277 "impl_name": "ssl", 00:16:13.277 "recv_buf_size": 4096, 00:16:13.277 "send_buf_size": 4096, 00:16:13.277 "enable_recv_pipe": true, 00:16:13.277 "enable_quickack": false, 00:16:13.277 "enable_placement_id": 0, 00:16:13.277 "enable_zerocopy_send_server": true, 00:16:13.277 "enable_zerocopy_send_client": false, 00:16:13.277 "zerocopy_threshold": 0, 00:16:13.277 "tls_version": 0, 00:16:13.277 "enable_ktls": false 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "sock_impl_set_options", 00:16:13.277 "params": { 00:16:13.277 "impl_name": "posix", 00:16:13.277 "recv_buf_size": 2097152, 00:16:13.277 "send_buf_size": 2097152, 00:16:13.277 "enable_recv_pipe": true, 00:16:13.277 "enable_quickack": 
false, 00:16:13.277 "enable_placement_id": 0, 00:16:13.277 "enable_zerocopy_send_server": true, 00:16:13.277 "enable_zerocopy_send_client": false, 00:16:13.277 "zerocopy_threshold": 0, 00:16:13.277 "tls_version": 0, 00:16:13.277 "enable_ktls": false 00:16:13.277 } 00:16:13.277 } 00:16:13.277 ] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "vmd", 00:16:13.277 "config": [] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "accel", 00:16:13.277 "config": [ 00:16:13.277 { 00:16:13.277 "method": "accel_set_options", 00:16:13.277 "params": { 00:16:13.277 "small_cache_size": 128, 00:16:13.277 "large_cache_size": 16, 00:16:13.277 "task_count": 2048, 00:16:13.277 "sequence_count": 2048, 00:16:13.277 "buf_count": 2048 00:16:13.277 } 00:16:13.277 } 00:16:13.277 ] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "bdev", 00:16:13.277 "config": [ 00:16:13.277 { 00:16:13.277 "method": "bdev_set_options", 00:16:13.277 "params": { 00:16:13.277 "bdev_io_pool_size": 65535, 00:16:13.277 "bdev_io_cache_size": 256, 00:16:13.277 "bdev_auto_examine": true, 00:16:13.277 "iobuf_small_cache_size": 128, 00:16:13.277 "iobuf_large_cache_size": 16 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "bdev_raid_set_options", 00:16:13.277 "params": { 00:16:13.277 "process_window_size_kb": 1024, 00:16:13.277 "process_max_bandwidth_mb_sec": 0 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "bdev_iscsi_set_options", 00:16:13.277 "params": { 00:16:13.277 "timeout_sec": 30 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "bdev_nvme_set_options", 00:16:13.277 "params": { 00:16:13.277 "action_on_timeout": "none", 00:16:13.277 "timeout_us": 0, 00:16:13.277 "timeout_admin_us": 0, 00:16:13.277 "keep_alive_timeout_ms": 10000, 00:16:13.277 "arbitration_burst": 0, 00:16:13.277 "low_priority_weight": 0, 00:16:13.277 "medium_priority_weight": 0, 00:16:13.277 "high_priority_weight": 0, 00:16:13.277 "nvme_adminq_poll_period_us": 10000, 00:16:13.277 "nvme_ioq_poll_period_us": 0, 00:16:13.277 "io_queue_requests": 0, 00:16:13.277 "delay_cmd_submit": true, 00:16:13.277 "transport_retry_count": 4, 00:16:13.277 "bdev_retry_count": 3, 00:16:13.277 "transport_ack_timeout": 0, 00:16:13.277 "ctrlr_loss_timeout_sec": 0, 00:16:13.277 "reconnect_delay_sec": 0, 00:16:13.277 "fast_io_fail_timeout_sec": 0, 00:16:13.277 "disable_auto_failback": false, 00:16:13.277 "generate_uuids": false, 00:16:13.277 "transport_tos": 0, 00:16:13.277 "nvme_error_stat": false, 00:16:13.277 "rdma_srq_size": 0, 00:16:13.277 "io_path_stat": false, 00:16:13.277 "allow_accel_sequence": false, 00:16:13.277 "rdma_max_cq_size": 0, 00:16:13.277 "rdma_cm_event_timeout_ms": 0, 00:16:13.277 "dhchap_digests": [ 00:16:13.277 "sha256", 00:16:13.277 "sha384", 00:16:13.277 "sha512" 00:16:13.277 ], 00:16:13.277 "dhchap_dhgroups": [ 00:16:13.277 "null", 00:16:13.277 "ffdhe2048", 00:16:13.277 "ffdhe3072", 00:16:13.277 "ffdhe4096", 00:16:13.277 "ffdhe6144", 00:16:13.277 "ffdhe8192" 00:16:13.277 ] 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "bdev_nvme_set_hotplug", 00:16:13.277 "params": { 00:16:13.277 "period_us": 100000, 00:16:13.277 "enable": false 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "bdev_malloc_create", 00:16:13.277 "params": { 00:16:13.277 "name": "malloc0", 00:16:13.277 "num_blocks": 8192, 00:16:13.277 "block_size": 4096, 00:16:13.277 "physical_block_size": 4096, 00:16:13.277 "uuid": "bec6408d-d91b-47f4-8810-23d9c96852ac", 00:16:13.277 
"optimal_io_boundary": 0, 00:16:13.277 "md_size": 0, 00:16:13.277 "dif_type": 0, 00:16:13.277 "dif_is_head_of_md": false, 00:16:13.277 "dif_pi_format": 0 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "bdev_wait_for_examine" 00:16:13.277 } 00:16:13.277 ] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "scsi", 00:16:13.277 "config": null 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "scheduler", 00:16:13.277 "config": [ 00:16:13.277 { 00:16:13.277 "method": "framework_set_scheduler", 00:16:13.277 "params": { 00:16:13.277 "name": "static" 00:16:13.277 } 00:16:13.277 } 00:16:13.277 ] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "vhost_scsi", 00:16:13.277 "config": [] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "vhost_blk", 00:16:13.277 "config": [] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "ublk", 00:16:13.277 "config": [ 00:16:13.277 { 00:16:13.277 "method": "ublk_create_target", 00:16:13.277 "params": { 00:16:13.277 "cpumask": "1" 00:16:13.277 } 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "method": "ublk_start_disk", 00:16:13.277 "params": { 00:16:13.277 "bdev_name": "malloc0", 00:16:13.277 "ublk_id": 0, 00:16:13.277 "num_queues": 1, 00:16:13.277 "queue_depth": 128 00:16:13.277 } 00:16:13.277 } 00:16:13.277 ] 00:16:13.277 }, 00:16:13.277 { 00:16:13.277 "subsystem": "nbd", 00:16:13.278 "config": [] 00:16:13.278 }, 00:16:13.278 { 00:16:13.278 "subsystem": "nvmf", 00:16:13.278 "config": [ 00:16:13.278 { 00:16:13.278 "method": "nvmf_set_config", 00:16:13.278 "params": { 00:16:13.278 "discovery_filter": "match_any", 00:16:13.278 "admin_cmd_passthru": { 00:16:13.278 "identify_ctrlr": false 00:16:13.278 }, 00:16:13.278 "dhchap_digests": [ 00:16:13.278 "sha256", 00:16:13.278 "sha384", 00:16:13.278 "sha512" 00:16:13.278 ], 00:16:13.278 "dhchap_dhgroups": [ 00:16:13.278 "null", 00:16:13.278 "ffdhe2048", 00:16:13.278 "ffdhe3072", 00:16:13.278 "ffdhe4096", 00:16:13.278 "ffdhe6144", 00:16:13.278 "ffdhe8192" 00:16:13.278 ] 00:16:13.278 } 00:16:13.278 }, 00:16:13.278 { 00:16:13.278 "method": "nvmf_set_max_subsystems", 00:16:13.278 "params": { 00:16:13.278 "max_subsystems": 1024 00:16:13.278 } 00:16:13.278 }, 00:16:13.278 { 00:16:13.278 "method": "nvmf_set_crdt", 00:16:13.278 "params": { 00:16:13.278 "crdt1": 0, 00:16:13.278 "crdt2": 0, 00:16:13.278 "crdt3": 0 00:16:13.278 } 00:16:13.278 } 00:16:13.278 ] 00:16:13.278 }, 00:16:13.278 { 00:16:13.278 "subsystem": "iscsi", 00:16:13.278 "config": [ 00:16:13.278 { 00:16:13.278 "method": "iscsi_set_options", 00:16:13.278 "params": { 00:16:13.278 "node_base": "iqn.2016-06.io.spdk", 00:16:13.278 "max_sessions": 128, 00:16:13.278 "max_connections_per_session": 2, 00:16:13.278 "max_queue_depth": 64, 00:16:13.278 "default_time2wait": 2, 00:16:13.278 "default_time2retain": 20, 00:16:13.278 "first_burst_length": 8192, 00:16:13.278 "immediate_data": true, 00:16:13.278 "allow_duplicated_isid": false, 00:16:13.278 "error_recovery_level": 0, 00:16:13.278 "nop_timeout": 60, 00:16:13.278 "nop_in_interval": 30, 00:16:13.278 "disable_chap": false, 00:16:13.278 "require_chap": false, 00:16:13.278 "mutual_chap": false, 00:16:13.278 "chap_group": 0, 00:16:13.278 "max_large_datain_per_connection": 64, 00:16:13.278 "max_r2t_per_connection": 4, 00:16:13.278 "pdu_pool_size": 36864, 00:16:13.278 "immediate_data_pool_size": 16384, 00:16:13.278 "data_out_pool_size": 2048 00:16:13.278 } 00:16:13.278 } 00:16:13.278 ] 00:16:13.278 } 00:16:13.278 ] 00:16:13.278 }' 00:16:13.278 [2024-11-26 01:01:36.088774] 
Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:13.278 [2024-11-26 01:01:36.088935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86556 ] 00:16:13.539 [2024-11-26 01:01:36.226506] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:13.539 [2024-11-26 01:01:36.256519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.539 [2024-11-26 01:01:36.281186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.844 [2024-11-26 01:01:36.666868] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:13.844 [2024-11-26 01:01:36.667175] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:13.844 [2024-11-26 01:01:36.674978] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:13.844 [2024-11-26 01:01:36.675065] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:13.844 [2024-11-26 01:01:36.675075] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:13.844 [2024-11-26 01:01:36.675082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:13.844 [2024-11-26 01:01:36.683948] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:13.844 [2024-11-26 01:01:36.683976] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:13.844 [2024-11-26 01:01:36.690877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:13.844 [2024-11-26 01:01:36.690973] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:13.844 [2024-11-26 01:01:36.707869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86556 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 86556 ']' 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 86556 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.149 01:01:36 
ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86556 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86556' 00:16:14.149 killing process with pid 86556 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 86556 00:16:14.149 01:01:36 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 86556 00:16:14.722 [2024-11-26 01:01:37.396698] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:14.722 [2024-11-26 01:01:37.432016] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:14.722 [2024-11-26 01:01:37.432182] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:14.722 [2024-11-26 01:01:37.441887] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:14.722 [2024-11-26 01:01:37.441968] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:14.722 [2024-11-26 01:01:37.441978] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:14.722 [2024-11-26 01:01:37.442012] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:14.722 [2024-11-26 01:01:37.442207] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:15.295 01:01:38 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:15.295 00:16:15.295 real 0m4.422s 00:16:15.295 user 0m2.808s 00:16:15.295 sys 0m2.253s 00:16:15.296 ************************************ 00:16:15.296 END TEST test_save_ublk_config 00:16:15.296 01:01:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.296 01:01:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:15.296 ************************************ 00:16:15.296 01:01:38 ublk -- ublk/ublk.sh@139 -- # spdk_pid=86614 00:16:15.296 01:01:38 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:15.296 01:01:38 ublk -- ublk/ublk.sh@141 -- # waitforlisten 86614 00:16:15.296 01:01:38 ublk -- common/autotest_common.sh@835 -- # '[' -z 86614 ']' 00:16:15.296 01:01:38 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.296 01:01:38 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:15.296 01:01:38 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.296 01:01:38 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.296 01:01:38 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.296 01:01:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:15.296 [2024-11-26 01:01:38.183926] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
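The save/replay cycle that test_save_ublk_config just completed can be reproduced by hand. What follows is a minimal sketch, assuming SPDK's scripts/rpc.py and a built spdk_tgt; the paths, the malloc size argument, and the config filename are illustrative, but save_config is the real RPC that emits the JSON blob echoed into /dev/fd/63 above:

    # 1. Boot a target and create the ublk state to be captured.
    ./build/bin/spdk_tgt -m 0x1 -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096    # 8192 blocks x 4096 B, as in the dump
    ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    # 2. Capture the live configuration; this is the JSON shown above.
    ./scripts/rpc.py save_config > ublk_config.json
    kill %1; wait
    # 3. Replay: a fresh target rebuilds the same /dev/ublkb0 from the file.
    ./build/bin/spdk_tgt -L ublk -c ublk_config.json

In the captured JSON the ublk subsystem reduces to exactly two calls, ublk_create_target with cpumask "1" and ublk_start_disk for bdev malloc0 with one queue of depth 128, which is why the replayed target re-exposes /dev/ublkb0 with no further RPCs before the rpc_cmd ublk_get_disks check.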
00:16:15.296 [2024-11-26 01:01:38.184039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86614 ] 00:16:15.557 [2024-11-26 01:01:38.317367] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:15.557 [2024-11-26 01:01:38.340275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.557 [2024-11-26 01:01:38.368656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.557 [2024-11-26 01:01:38.368673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.129 01:01:39 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.129 01:01:39 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:16.129 01:01:39 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:16.129 01:01:39 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:16.129 01:01:39 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.129 01:01:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.129 ************************************ 00:16:16.129 START TEST test_create_ublk 00:16:16.129 ************************************ 00:16:16.129 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:16.129 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:16.129 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.129 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.129 [2024-11-26 01:01:39.042864] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:16.129 [2024-11-26 01:01:39.044068] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:16.129 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.391 [2024-11-26 01:01:39.114973] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:16.391 [2024-11-26 01:01:39.115301] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:16.391 [2024-11-26 01:01:39.115311] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:16.391 [2024-11-26 01:01:39.115316] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:16.391 [2024-11-26 01:01:39.124085] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:16.391 [2024-11-26 01:01:39.124104] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:16.391 [2024-11-26 01:01:39.130870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:16.391 [2024-11-26 01:01:39.131374] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:16.391 [2024-11-26 01:01:39.161870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:16.391 01:01:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:16.391 { 00:16:16.391 "ublk_device": "/dev/ublkb0", 00:16:16.391 "id": 0, 00:16:16.391 "queue_depth": 512, 00:16:16.391 "num_queues": 4, 00:16:16.391 "bdev_name": "Malloc0" 00:16:16.391 } 00:16:16.391 ]' 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:16.391 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:16.651 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:16.651 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:16.651 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:16.651 01:01:39 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:16.651 01:01:39 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:16.651 fio: verification read phase will never start because write phase uses all of runtime 00:16:16.651 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:16.651 fio-3.35 00:16:16.651 Starting 1 process 00:16:28.888 00:16:28.888 fio_test: (groupid=0, jobs=1): err= 0: pid=86659: Tue Nov 26 01:01:49 2024 00:16:28.888 write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(621MiB/10001msec); 0 zone resets 00:16:28.888 clat (usec): min=39, max=4154, avg=62.09, stdev=127.33 00:16:28.888 lat (usec): min=40, max=4155, avg=62.53, stdev=127.37 00:16:28.888 clat percentiles (usec): 00:16:28.888 | 1.00th=[ 47], 5.00th=[ 49], 10.00th=[ 49], 20.00th=[ 51], 00:16:28.888 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 56], 00:16:28.888 | 70.00th=[ 57], 80.00th=[ 60], 90.00th=[ 68], 95.00th=[ 73], 00:16:28.888 | 99.00th=[ 93], 99.50th=[ 215], 99.90th=[ 2999], 99.95th=[ 3523], 00:16:28.888 | 99.99th=[ 3851] 00:16:28.888 bw ( KiB/s): min=26184, max=71752, per=99.64%, avg=63388.63, stdev=10421.42, samples=19 00:16:28.888 iops : min= 6546, max=17938, avg=15847.16, stdev=2605.36, samples=19 00:16:28.888 lat (usec) : 50=16.42%, 100=82.66%, 250=0.58%, 500=0.13%, 750=0.01% 00:16:28.888 lat (usec) : 1000=0.01% 00:16:28.888 lat (msec) : 2=0.05%, 4=0.14%, 10=0.01% 00:16:28.888 cpu : usr=2.41%, sys=16.08%, ctx=159061, majf=0, minf=795 00:16:28.888 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:28.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.888 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:28.888 issued rwts: total=0,159052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:28.888 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:28.889 00:16:28.889 Run status group 0 (all jobs): 00:16:28.889 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=621MiB (651MB), run=10001-10001msec 00:16:28.889 00:16:28.889 Disk stats (read/write): 00:16:28.889 ublkb0: ios=0/157222, merge=0/0, ticks=0/7793, in_queue=7794, util=99.06% 00:16:28.889 01:01:49 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 [2024-11-26 01:01:49.600045] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.889 [2024-11-26 01:01:49.641904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.889 [2024-11-26 01:01:49.642512] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.889 [2024-11-26 01:01:49.649873] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.889 [2024-11-26 01:01:49.650125] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:28.889 [2024-11-26 01:01:49.650142] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 
stopped 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.889 01:01:49 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 [2024-11-26 01:01:49.665951] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:28.889 request: 00:16:28.889 { 00:16:28.889 "ublk_id": 0, 00:16:28.889 "method": "ublk_stop_disk", 00:16:28.889 "req_id": 1 00:16:28.889 } 00:16:28.889 Got JSON-RPC error response 00:16:28.889 response: 00:16:28.889 { 00:16:28.889 "code": -19, 00:16:28.889 "message": "No such device" 00:16:28.889 } 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:28.889 01:01:49 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 [2024-11-26 01:01:49.681913] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:28.889 [2024-11-26 01:01:49.683255] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:28.889 [2024-11-26 01:01:49.683293] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.889 01:01:49 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.889 01:01:49 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:28.889 01:01:49 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.889 01:01:49 
ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:28.889 01:01:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:28.889 01:01:49 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:28.889 01:01:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.889 01:01:49 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:28.889 01:01:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:28.889 01:01:49 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:28.889 00:16:28.889 real 0m10.827s 00:16:28.889 user 0m0.537s 00:16:28.889 sys 0m1.709s 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.889 ************************************ 00:16:28.889 END TEST test_create_ublk 00:16:28.889 ************************************ 00:16:28.889 01:01:49 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 01:01:49 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:28.889 01:01:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:28.889 01:01:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.889 01:01:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 ************************************ 00:16:28.889 START TEST test_create_multi_ublk 00:16:28.889 ************************************ 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 [2024-11-26 01:01:49.924857] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:28.889 [2024-11-26 01:01:49.926046] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.889 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:28.889 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:28.889 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.889 01:01:50 
ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.889 [2024-11-26 01:01:50.032991] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:28.890 [2024-11-26 01:01:50.033317] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:28.890 [2024-11-26 01:01:50.033325] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:28.890 [2024-11-26 01:01:50.033332] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:28.890 [2024-11-26 01:01:50.056880] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:28.890 [2024-11-26 01:01:50.056906] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:28.890 [2024-11-26 01:01:50.068859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:28.890 [2024-11-26 01:01:50.069409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:28.890 [2024-11-26 01:01:50.114863] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.890 [2024-11-26 01:01:50.222963] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:28.890 [2024-11-26 01:01:50.223276] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:28.890 [2024-11-26 01:01:50.223285] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:28.890 [2024-11-26 01:01:50.223290] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:28.890 [2024-11-26 01:01:50.234874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:28.890 [2024-11-26 01:01:50.234892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:28.890 [2024-11-26 01:01:50.246870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:28.890 [2024-11-26 01:01:50.247390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:28.890 [2024-11-26 01:01:50.271865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 
ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.890 [2024-11-26 01:01:50.378961] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:28.890 [2024-11-26 01:01:50.379281] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:28.890 [2024-11-26 01:01:50.379299] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:28.890 [2024-11-26 01:01:50.379305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:28.890 [2024-11-26 01:01:50.390870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:28.890 [2024-11-26 01:01:50.390890] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:28.890 [2024-11-26 01:01:50.402860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:28.890 [2024-11-26 01:01:50.403381] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:28.890 [2024-11-26 01:01:50.442860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.890 [2024-11-26 01:01:50.550966] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:28.890 [2024-11-26 01:01:50.551270] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:28.890 [2024-11-26 01:01:50.551279] ublk.c: 
971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:28.890 [2024-11-26 01:01:50.551284] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:28.890 [2024-11-26 01:01:50.562877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:28.890 [2024-11-26 01:01:50.562892] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:28.890 [2024-11-26 01:01:50.574861] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:28.890 [2024-11-26 01:01:50.575372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:28.890 [2024-11-26 01:01:50.581907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.890 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:28.890 { 00:16:28.890 "ublk_device": "/dev/ublkb0", 00:16:28.890 "id": 0, 00:16:28.890 "queue_depth": 512, 00:16:28.890 "num_queues": 4, 00:16:28.890 "bdev_name": "Malloc0" 00:16:28.890 }, 00:16:28.890 { 00:16:28.890 "ublk_device": "/dev/ublkb1", 00:16:28.890 "id": 1, 00:16:28.890 "queue_depth": 512, 00:16:28.890 "num_queues": 4, 00:16:28.890 "bdev_name": "Malloc1" 00:16:28.890 }, 00:16:28.890 { 00:16:28.890 "ublk_device": "/dev/ublkb2", 00:16:28.890 "id": 2, 00:16:28.890 "queue_depth": 512, 00:16:28.890 "num_queues": 4, 00:16:28.890 "bdev_name": "Malloc2" 00:16:28.890 }, 00:16:28.890 { 00:16:28.890 "ublk_device": "/dev/ublkb3", 00:16:28.890 "id": 3, 00:16:28.890 "queue_depth": 512, 00:16:28.890 "num_queues": 4, 00:16:28.891 "bdev_name": "Malloc3" 00:16:28.891 } 00:16:28.891 ]' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:50 
ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:28.891 01:01:50 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.891 [2024-11-26 01:01:51.266931] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.891 [2024-11-26 01:01:51.308877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.891 [2024-11-26 01:01:51.309577] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.891 [2024-11-26 01:01:51.314860] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.891 [2024-11-26 01:01:51.315099] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:28.891 [2024-11-26 01:01:51.315109] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.891 [2024-11-26 01:01:51.322940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.891 [2024-11-26 01:01:51.362440] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.891 [2024-11-26 01:01:51.363263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.891 [2024-11-26 01:01:51.369864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.891 [2024-11-26 01:01:51.370091] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:28.891 [2024-11-26 01:01:51.370100] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.891 [2024-11-26 01:01:51.384925] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.891 [2024-11-26 01:01:51.426439] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.891 [2024-11-26 01:01:51.427235] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.891 [2024-11-26 01:01:51.433870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.891 [2024-11-26 01:01:51.434100] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:28.891 [2024-11-26 01:01:51.434109] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 3 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.891 [2024-11-26 01:01:51.447938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:28.891 [2024-11-26 01:01:51.490381] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:28.891 [2024-11-26 01:01:51.491115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:28.891 [2024-11-26 01:01:51.496867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:28.891 [2024-11-26 01:01:51.497109] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:28.891 [2024-11-26 01:01:51.497117] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.891 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:28.892 [2024-11-26 01:01:51.688917] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:28.892 [2024-11-26 01:01:51.689764] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:28.892 [2024-11-26 01:01:51.689789] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.892 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.153 01:01:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.153 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.414 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.414 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:29.414 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:29.414 01:01:52 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:29.414 00:16:29.414 real 0m2.192s 00:16:29.414 user 0m0.823s 00:16:29.414 sys 0m0.129s 00:16:29.415 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.415 01:01:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:29.415 ************************************ 00:16:29.415 END TEST test_create_multi_ublk 00:16:29.415 ************************************ 00:16:29.415 01:01:52 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:29.415 01:01:52 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:29.415 01:01:52 ublk -- ublk/ublk.sh@130 -- # killprocess 86614 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@954 -- # '[' -z 86614 ']' 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@958 -- # kill -0 86614 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@959 -- # uname 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86614 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.415 killing process with pid 86614 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86614' 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@973 -- # kill 86614 00:16:29.415 01:01:52 ublk -- common/autotest_common.sh@978 -- # wait 86614 00:16:29.676 [2024-11-26 01:01:52.392620] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:29.676 [2024-11-26 01:01:52.392682] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:29.937 00:16:29.937 real 0m19.176s 00:16:29.937 user 0m28.525s 00:16:29.937 sys 0m8.852s 00:16:29.937 01:01:52 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.937 ************************************ 00:16:29.937 END TEST ublk 00:16:29.937 ************************************ 00:16:29.937 01:01:52 ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.937 01:01:52 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:29.937 01:01:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:29.937 01:01:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.937 01:01:52 -- common/autotest_common.sh@10 -- # set +x 00:16:29.937 ************************************ 00:16:29.937 START TEST ublk_recovery 00:16:29.937 ************************************ 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:29.937 * Looking for test storage... 00:16:29.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:29.937 01:01:52 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.937 --rc genhtml_branch_coverage=1 00:16:29.937 --rc genhtml_function_coverage=1 00:16:29.937 --rc genhtml_legend=1 00:16:29.937 --rc geninfo_all_blocks=1 00:16:29.937 --rc geninfo_unexecuted_blocks=1 00:16:29.937 00:16:29.937 ' 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.937 --rc genhtml_branch_coverage=1 00:16:29.937 --rc genhtml_function_coverage=1 00:16:29.937 --rc genhtml_legend=1 00:16:29.937 --rc geninfo_all_blocks=1 00:16:29.937 --rc geninfo_unexecuted_blocks=1 00:16:29.937 00:16:29.937 ' 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.937 --rc genhtml_branch_coverage=1 00:16:29.937 --rc genhtml_function_coverage=1 00:16:29.937 --rc genhtml_legend=1 00:16:29.937 --rc geninfo_all_blocks=1 00:16:29.937 --rc geninfo_unexecuted_blocks=1 00:16:29.937 00:16:29.937 ' 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:29.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.937 --rc genhtml_branch_coverage=1 00:16:29.937 --rc genhtml_function_coverage=1 00:16:29.937 --rc genhtml_legend=1 00:16:29.937 --rc geninfo_all_blocks=1 00:16:29.937 --rc geninfo_unexecuted_blocks=1 00:16:29.937 00:16:29.937 ' 00:16:29.937 01:01:52 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:29.937 01:01:52 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:29.937 01:01:52 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:29.937 01:01:52 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=86972 00:16:29.937 01:01:52 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.937 01:01:52 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 86972 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 86972 ']' 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.937 01:01:52 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:29.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.937 01:01:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:30.198 [2024-11-26 01:01:52.900389] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:30.198 [2024-11-26 01:01:52.900496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86972 ] 00:16:30.198 [2024-11-26 01:01:53.029321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
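For readability, the setup phase recorded in the trace above condenses to a few shell steps (the core mask, binary path, and PID are the ones from this run; waitforlisten is the autotest_common.sh helper that polls the RPC socket until the target answers):

  modprobe ublk_drv
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # cores 0-1, ublk debug tracing on
  spdk_pid=$!                                                        # 86972 in this run
  trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$spdk_pid"                                          # blocks until /var/tmp/spdk.sock is up

The ublk_create_target, bdev_malloc_create, and ublk_start_disk RPCs that provision /dev/ublkb1 on top of this target follow in the trace below.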
00:16:30.198 [2024-11-26 01:01:53.053949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:30.198 [2024-11-26 01:01:53.089264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.198 [2024-11-26 01:01:53.089289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:31.142 01:01:53 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.142 [2024-11-26 01:01:53.745866] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:31.142 [2024-11-26 01:01:53.747101] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.142 01:01:53 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.142 malloc0 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.142 01:01:53 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.142 01:01:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:31.142 [2024-11-26 01:01:53.785972] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:31.142 [2024-11-26 01:01:53.786059] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:31.142 [2024-11-26 01:01:53.786069] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:31.142 [2024-11-26 01:01:53.786082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:31.142 [2024-11-26 01:01:53.794965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:31.142 [2024-11-26 01:01:53.794982] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:31.142 [2024-11-26 01:01:53.801867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:31.142 [2024-11-26 01:01:53.801981] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:31.143 [2024-11-26 01:01:53.816868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:31.143 1 00:16:31.143 01:01:53 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.143 01:01:53 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:32.083 01:01:54 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87005 00:16:32.083 01:01:54 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:32.083 01:01:54 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:32.083 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:32.083 
fio-3.35 00:16:32.083 Starting 1 process 00:16:37.354 01:01:59 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 86972 00:16:37.354 01:01:59 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:42.645 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 86972 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:42.645 01:02:04 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87110 00:16:42.645 01:02:04 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:42.645 01:02:04 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:42.645 01:02:04 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87110 00:16:42.645 01:02:04 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 87110 ']' 00:16:42.645 01:02:04 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.645 01:02:04 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.645 01:02:04 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.645 01:02:04 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.645 01:02:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.645 [2024-11-26 01:02:04.915773] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:16:42.645 [2024-11-26 01:02:04.915909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87110 ] 00:16:42.645 [2024-11-26 01:02:05.048805] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
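The crash-and-restart step above, condensed into the commands actually traced (the backgrounding plumbing is a simplification of what ublk_recovery.sh does):

  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  fio_proc=$!          # 87005 in this run
  sleep 5              # let I/O get in flight
  kill -9 86972        # hard-kill the ublk target mid-workload
  sleep 5
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &   # replacement target, PID 87110 here

Because the kernel half of the ublk device stays registered across the crash, outstanding I/O is queued rather than failed while no target is serving it; the ublk_recover_disk RPC below re-attaches the device to the new process, and the repeated UBLK_CMD_GET_DEV_INFO submissions are the recovery path polling until the device reports state 1 (recoverable), after which USER_RECOVERY start/end complete and the 60-second fio run finishes with err=0.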
00:16:42.645 [2024-11-26 01:02:05.072620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:42.645 [2024-11-26 01:02:05.097921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.645 [2024-11-26 01:02:05.097948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:42.904 01:02:05 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.904 [2024-11-26 01:02:05.705863] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:42.904 [2024-11-26 01:02:05.707077] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.904 01:02:05 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.904 malloc0 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.904 01:02:05 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:42.904 [2024-11-26 01:02:05.745993] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:42.904 [2024-11-26 01:02:05.746020] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:42.904 [2024-11-26 01:02:05.746028] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:42.904 [2024-11-26 01:02:05.753894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:42.904 [2024-11-26 01:02:05.753920] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:42.904 1 00:16:42.904 01:02:05 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.904 01:02:05 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87005 00:16:43.839 [2024-11-26 01:02:06.753949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:44.097 [2024-11-26 01:02:06.761867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:44.097 [2024-11-26 01:02:06.761885] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:45.030 [2024-11-26 01:02:07.761912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:45.030 [2024-11-26 01:02:07.765871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:45.030 [2024-11-26 01:02:07.765887] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:45.965 [2024-11-26 01:02:08.765915] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:45.965 [2024-11-26 01:02:08.769863] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:45.965 [2024-11-26 
01:02:08.769875] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:45.965 [2024-11-26 01:02:08.769885] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:45.965 [2024-11-26 01:02:08.769964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:07.886 [2024-11-26 01:02:30.152879] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:07.886 [2024-11-26 01:02:30.159525] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:07.886 [2024-11-26 01:02:30.166875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:07.887 [2024-11-26 01:02:30.166893] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:34.431 00:17:34.431 fio_test: (groupid=0, jobs=1): err= 0: pid=87008: Tue Nov 26 01:02:55 2024 00:17:34.431 read: IOPS=13.9k, BW=54.1MiB/s (56.8MB/s)(3248MiB/60002msec) 00:17:34.431 slat (nsec): min=1208, max=150200, avg=5523.78, stdev=1484.80 00:17:34.431 clat (usec): min=1061, max=30344k, avg=4243.31, stdev=246774.80 00:17:34.431 lat (usec): min=1071, max=30344k, avg=4248.84, stdev=246774.80 00:17:34.431 clat percentiles (usec): 00:17:34.431 | 1.00th=[ 1844], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2040], 00:17:34.431 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:17:34.431 | 70.00th=[ 2114], 80.00th=[ 2147], 90.00th=[ 2442], 95.00th=[ 3195], 00:17:34.431 | 99.00th=[ 5211], 99.50th=[ 5735], 99.90th=[ 7767], 99.95th=[12649], 00:17:34.431 | 99.99th=[13435] 00:17:34.431 bw ( KiB/s): min=42272, max=118248, per=100.00%, avg=110902.92, stdev=13930.16, samples=59 00:17:34.431 iops : min=10568, max=29562, avg=27725.73, stdev=3482.54, samples=59 00:17:34.431 write: IOPS=13.8k, BW=54.0MiB/s (56.7MB/s)(3243MiB/60002msec); 0 zone resets 00:17:34.431 slat (nsec): min=1403, max=283393, avg=5733.14, stdev=1556.56 00:17:34.431 clat (usec): min=1140, max=30344k, avg=4989.24, stdev=284491.70 00:17:34.431 lat (usec): min=1150, max=30344k, avg=4994.97, stdev=284491.69 00:17:34.431 clat percentiles (usec): 00:17:34.431 | 1.00th=[ 1893], 5.00th=[ 2073], 10.00th=[ 2114], 20.00th=[ 2147], 00:17:34.431 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2180], 60.00th=[ 2212], 00:17:34.431 | 70.00th=[ 2212], 80.00th=[ 2245], 90.00th=[ 2474], 95.00th=[ 3130], 00:17:34.431 | 99.00th=[ 5276], 99.50th=[ 5800], 99.90th=[ 7767], 99.95th=[12649], 00:17:34.431 | 99.99th=[13566] 00:17:34.431 bw ( KiB/s): min=41920, max=117616, per=100.00%, avg=110770.17, stdev=13901.66, samples=59 00:17:34.431 iops : min=10480, max=29404, avg=27692.54, stdev=3475.41, samples=59 00:17:34.431 lat (msec) : 2=5.47%, 4=91.54%, 10=2.94%, 20=0.05%, >=2000=0.01% 00:17:34.431 cpu : usr=3.06%, sys=15.87%, ctx=54482, majf=0, minf=13 00:17:34.431 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:34.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:34.431 issued rwts: total=831367,830244,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:34.431 00:17:34.431 Run status group 0 (all jobs): 00:17:34.431 READ: bw=54.1MiB/s (56.8MB/s), 54.1MiB/s-54.1MiB/s (56.8MB/s-56.8MB/s), io=3248MiB (3405MB), run=60002-60002msec 00:17:34.431 WRITE: 
bw=54.0MiB/s (56.7MB/s), 54.0MiB/s-54.0MiB/s (56.7MB/s-56.7MB/s), io=3243MiB (3401MB), run=60002-60002msec 00:17:34.431 00:17:34.431 Disk stats (read/write): 00:17:34.431 ublkb1: ios=828263/827169, merge=0/0, ticks=3476128/4019243, in_queue=7495372, util=99.88% 00:17:34.431 01:02:55 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.431 [2024-11-26 01:02:55.082089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:34.431 [2024-11-26 01:02:55.116996] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:34.431 [2024-11-26 01:02:55.117159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:34.431 [2024-11-26 01:02:55.126874] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:34.431 [2024-11-26 01:02:55.127000] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:34.431 [2024-11-26 01:02:55.127008] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.431 01:02:55 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.431 [2024-11-26 01:02:55.142927] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:34.431 [2024-11-26 01:02:55.144126] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:34.431 [2024-11-26 01:02:55.144159] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.431 01:02:55 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:34.431 01:02:55 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:34.431 01:02:55 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87110 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 87110 ']' 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 87110 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87110 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:34.431 killing process with pid 87110 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87110' 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@973 -- # kill 87110 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@978 -- # wait 87110 00:17:34.431 [2024-11-26 01:02:55.410101] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:34.431 [2024-11-26 01:02:55.410156] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:34.431 00:17:34.431 real 1m3.082s 00:17:34.431 user 1m44.219s 00:17:34.431 sys 0m22.709s 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:17:34.431 01:02:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:34.431 ************************************ 00:17:34.431 END TEST ublk_recovery 00:17:34.431 ************************************ 00:17:34.431 01:02:55 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:34.431 01:02:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:34.431 01:02:55 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:34.431 01:02:55 -- common/autotest_common.sh@10 -- # set +x 00:17:34.431 01:02:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:34.431 01:02:55 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:34.431 01:02:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:34.431 01:02:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.431 01:02:55 -- common/autotest_common.sh@10 -- # set +x 00:17:34.431 ************************************ 00:17:34.431 START TEST ftl 00:17:34.431 ************************************ 00:17:34.431 01:02:55 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:34.431 * Looking for test storage... 00:17:34.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:34.431 01:02:55 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.431 01:02:55 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.431 01:02:55 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.431 01:02:56 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.431 01:02:56 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.431 01:02:56 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.431 01:02:56 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.431 01:02:56 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.431 01:02:56 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.431 01:02:56 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.432 01:02:56 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.432 01:02:56 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.432 01:02:56 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.432 01:02:56 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.432 01:02:56 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.432 01:02:56 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:34.432 01:02:56 ftl -- scripts/common.sh@345 -- # : 1 00:17:34.432 01:02:56 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.432 01:02:56 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.432 01:02:56 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:34.432 01:02:56 ftl -- scripts/common.sh@353 -- # local d=1 00:17:34.432 01:02:56 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.432 01:02:56 ftl -- scripts/common.sh@355 -- # echo 1 00:17:34.432 01:02:56 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.432 01:02:56 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:34.432 01:02:56 ftl -- scripts/common.sh@353 -- # local d=2 00:17:34.432 01:02:56 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.432 01:02:56 ftl -- scripts/common.sh@355 -- # echo 2 00:17:34.432 01:02:56 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.432 01:02:56 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.432 01:02:56 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.432 01:02:56 ftl -- scripts/common.sh@368 -- # return 0 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.432 --rc genhtml_branch_coverage=1 00:17:34.432 --rc genhtml_function_coverage=1 00:17:34.432 --rc genhtml_legend=1 00:17:34.432 --rc geninfo_all_blocks=1 00:17:34.432 --rc geninfo_unexecuted_blocks=1 00:17:34.432 00:17:34.432 ' 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.432 --rc genhtml_branch_coverage=1 00:17:34.432 --rc genhtml_function_coverage=1 00:17:34.432 --rc genhtml_legend=1 00:17:34.432 --rc geninfo_all_blocks=1 00:17:34.432 --rc geninfo_unexecuted_blocks=1 00:17:34.432 00:17:34.432 ' 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.432 --rc genhtml_branch_coverage=1 00:17:34.432 --rc genhtml_function_coverage=1 00:17:34.432 --rc genhtml_legend=1 00:17:34.432 --rc geninfo_all_blocks=1 00:17:34.432 --rc geninfo_unexecuted_blocks=1 00:17:34.432 00:17:34.432 ' 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.432 --rc genhtml_branch_coverage=1 00:17:34.432 --rc genhtml_function_coverage=1 00:17:34.432 --rc genhtml_legend=1 00:17:34.432 --rc geninfo_all_blocks=1 00:17:34.432 --rc geninfo_unexecuted_blocks=1 00:17:34.432 00:17:34.432 ' 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:34.432 01:02:56 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:34.432 01:02:56 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:34.432 01:02:56 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:34.432 01:02:56 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:34.432 01:02:56 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:34.432 01:02:56 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.432 01:02:56 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:34.432 01:02:56 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:34.432 01:02:56 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.432 01:02:56 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.432 01:02:56 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:34.432 01:02:56 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:34.432 01:02:56 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:34.432 01:02:56 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:34.432 01:02:56 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:34.432 01:02:56 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:34.432 01:02:56 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.432 01:02:56 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.432 01:02:56 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:34.432 01:02:56 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:34.432 01:02:56 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:34.432 01:02:56 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:34.432 01:02:56 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:34.432 01:02:56 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:34.432 01:02:56 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:34.432 01:02:56 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:34.432 01:02:56 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:34.432 01:02:56 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:34.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:34.432 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:34.432 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:34.432 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:34.432 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=87914 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:34.432 01:02:56 ftl -- ftl/ftl.sh@38 -- # waitforlisten 87914 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@835 -- # '[' -z 87914 ']' 00:17:34.432 01:02:56 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.432 01:02:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:34.432 [2024-11-26 01:02:56.590396] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:17:34.432 [2024-11-26 01:02:56.590533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87914 ] 00:17:34.432 [2024-11-26 01:02:56.734890] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:34.432 [2024-11-26 01:02:56.764210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.432 [2024-11-26 01:02:56.794690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.692 01:02:57 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.692 01:02:57 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:34.692 01:02:57 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:34.953 01:02:57 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:35.214 01:02:58 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:35.214 01:02:58 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@50 -- # break 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:35.788 01:02:58 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:36.049 01:02:58 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:36.049 01:02:58 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:36.049 01:02:58 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:36.049 01:02:58 ftl -- ftl/ftl.sh@63 -- # break 00:17:36.049 01:02:58 ftl -- ftl/ftl.sh@66 -- # killprocess 87914 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@954 -- # '[' -z 87914 ']' 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@958 -- # kill -0 87914 
00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@959 -- # uname 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87914 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.049 killing process with pid 87914 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87914' 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@973 -- # kill 87914 00:17:36.049 01:02:58 ftl -- common/autotest_common.sh@978 -- # wait 87914 00:17:36.310 01:02:59 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:36.311 01:02:59 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:36.311 01:02:59 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:36.311 01:02:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.311 01:02:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:36.311 ************************************ 00:17:36.311 START TEST ftl_fio_basic 00:17:36.311 ************************************ 00:17:36.311 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:36.587 * Looking for test storage... 00:17:36.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:36.587 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.588 --rc genhtml_branch_coverage=1 00:17:36.588 --rc genhtml_function_coverage=1 00:17:36.588 --rc genhtml_legend=1 00:17:36.588 --rc geninfo_all_blocks=1 00:17:36.588 --rc geninfo_unexecuted_blocks=1 00:17:36.588 00:17:36.588 ' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.588 --rc genhtml_branch_coverage=1 00:17:36.588 --rc genhtml_function_coverage=1 00:17:36.588 --rc genhtml_legend=1 00:17:36.588 --rc geninfo_all_blocks=1 00:17:36.588 --rc geninfo_unexecuted_blocks=1 00:17:36.588 00:17:36.588 ' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.588 --rc genhtml_branch_coverage=1 00:17:36.588 --rc genhtml_function_coverage=1 00:17:36.588 --rc genhtml_legend=1 00:17:36.588 --rc geninfo_all_blocks=1 00:17:36.588 --rc geninfo_unexecuted_blocks=1 00:17:36.588 00:17:36.588 ' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.588 --rc genhtml_branch_coverage=1 00:17:36.588 --rc genhtml_function_coverage=1 00:17:36.588 --rc genhtml_legend=1 00:17:36.588 --rc geninfo_all_blocks=1 00:17:36.588 --rc geninfo_unexecuted_blocks=1 00:17:36.588 00:17:36.588 ' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88035 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88035 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 88035 ']' 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.588 01:02:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:36.588 [2024-11-26 01:02:59.463949] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:17:36.588 [2024-11-26 01:02:59.464100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88035 ] 00:17:36.866 [2024-11-26 01:02:59.601268] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
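For context on what that suite declaration turns into: each entry in suite['basic'] names a fio job, run against the ftl0 bdev via the FTL_BDEV_NAME and FTL_JSON_CONF exports above. A hedged sketch of the driver loop follows; the fio_bdev wrapper and the per-test job-file location are assumptions inferred from the exports, not shown verbatim in this trace:

  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  for t in randw-verify randw-verify-j2 randw-verify-depth128; do
      # assumed: fio_bdev is the autotest wrapper around fio's spdk_bdev
      # ioengine, and the job files live under test/ftl/config/fio/
      fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/"$t".fio
  done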
00:17:36.866 [2024-11-26 01:02:59.624324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:36.866 [2024-11-26 01:02:59.642875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:36.866 [2024-11-26 01:02:59.643093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.866 [2024-11-26 01:02:59.643096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:37.438 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:37.695 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:37.953 { 00:17:37.953 "name": "nvme0n1", 00:17:37.953 "aliases": [ 00:17:37.953 "04aaebce-f1a9-439a-a6dc-c1a1c124ff85" 00:17:37.953 ], 00:17:37.953 "product_name": "NVMe disk", 00:17:37.953 "block_size": 4096, 00:17:37.953 "num_blocks": 1310720, 00:17:37.953 "uuid": "04aaebce-f1a9-439a-a6dc-c1a1c124ff85", 00:17:37.953 "numa_id": -1, 00:17:37.953 "assigned_rate_limits": { 00:17:37.953 "rw_ios_per_sec": 0, 00:17:37.953 "rw_mbytes_per_sec": 0, 00:17:37.953 "r_mbytes_per_sec": 0, 00:17:37.953 "w_mbytes_per_sec": 0 00:17:37.953 }, 00:17:37.953 "claimed": false, 00:17:37.953 "zoned": false, 00:17:37.953 "supported_io_types": { 00:17:37.953 "read": true, 00:17:37.953 "write": true, 00:17:37.953 "unmap": true, 00:17:37.953 "flush": true, 00:17:37.953 "reset": true, 00:17:37.953 "nvme_admin": true, 00:17:37.953 "nvme_io": true, 00:17:37.953 "nvme_io_md": false, 00:17:37.953 "write_zeroes": true, 00:17:37.953 "zcopy": false, 00:17:37.953 "get_zone_info": false, 00:17:37.953 "zone_management": false, 00:17:37.953 "zone_append": false, 00:17:37.953 "compare": true, 00:17:37.953 "compare_and_write": false, 00:17:37.953 "abort": true, 00:17:37.953 "seek_hole": false, 00:17:37.953 "seek_data": false, 00:17:37.953 "copy": true, 00:17:37.953 "nvme_iov_md": false 00:17:37.953 }, 00:17:37.953 "driver_specific": { 00:17:37.953 "nvme": [ 00:17:37.953 { 00:17:37.953 "pci_address": "0000:00:11.0", 00:17:37.953 "trid": { 00:17:37.953 "trtype": "PCIe", 00:17:37.953 
"traddr": "0000:00:11.0" 00:17:37.953 }, 00:17:37.953 "ctrlr_data": { 00:17:37.953 "cntlid": 0, 00:17:37.953 "vendor_id": "0x1b36", 00:17:37.953 "model_number": "QEMU NVMe Ctrl", 00:17:37.953 "serial_number": "12341", 00:17:37.953 "firmware_revision": "8.0.0", 00:17:37.953 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:37.953 "oacs": { 00:17:37.953 "security": 0, 00:17:37.953 "format": 1, 00:17:37.953 "firmware": 0, 00:17:37.953 "ns_manage": 1 00:17:37.953 }, 00:17:37.953 "multi_ctrlr": false, 00:17:37.953 "ana_reporting": false 00:17:37.953 }, 00:17:37.953 "vs": { 00:17:37.953 "nvme_version": "1.4" 00:17:37.953 }, 00:17:37.953 "ns_data": { 00:17:37.953 "id": 1, 00:17:37.953 "can_share": false 00:17:37.953 } 00:17:37.953 } 00:17:37.953 ], 00:17:37.953 "mp_policy": "active_passive" 00:17:37.953 } 00:17:37.953 } 00:17:37.953 ]' 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:37.953 01:03:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:38.211 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:38.211 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:38.468 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a2ee783d-d641-4b62-9ab6-93052f13e85b 00:17:38.469 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a2ee783d-d641-4b62-9ab6-93052f13e85b 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:38.727 { 00:17:38.727 "name": "d9544f81-79b7-4856-9a8d-228615a3ad53", 00:17:38.727 "aliases": [ 00:17:38.727 "lvs/nvme0n1p0" 00:17:38.727 ], 00:17:38.727 "product_name": "Logical Volume", 00:17:38.727 "block_size": 4096, 00:17:38.727 "num_blocks": 26476544, 00:17:38.727 "uuid": "d9544f81-79b7-4856-9a8d-228615a3ad53", 00:17:38.727 "assigned_rate_limits": { 00:17:38.727 "rw_ios_per_sec": 0, 00:17:38.727 "rw_mbytes_per_sec": 0, 00:17:38.727 "r_mbytes_per_sec": 0, 00:17:38.727 "w_mbytes_per_sec": 0 00:17:38.727 }, 00:17:38.727 "claimed": false, 00:17:38.727 "zoned": false, 00:17:38.727 "supported_io_types": { 00:17:38.727 "read": true, 00:17:38.727 "write": true, 00:17:38.727 "unmap": true, 00:17:38.727 "flush": false, 00:17:38.727 "reset": true, 00:17:38.727 "nvme_admin": false, 00:17:38.727 "nvme_io": false, 00:17:38.727 "nvme_io_md": false, 00:17:38.727 "write_zeroes": true, 00:17:38.727 "zcopy": false, 00:17:38.727 "get_zone_info": false, 00:17:38.727 "zone_management": false, 00:17:38.727 "zone_append": false, 00:17:38.727 "compare": false, 00:17:38.727 "compare_and_write": false, 00:17:38.727 "abort": false, 00:17:38.727 "seek_hole": true, 00:17:38.727 "seek_data": true, 00:17:38.727 "copy": false, 00:17:38.727 "nvme_iov_md": false 00:17:38.727 }, 00:17:38.727 "driver_specific": { 00:17:38.727 "lvol": { 00:17:38.727 "lvol_store_uuid": "a2ee783d-d641-4b62-9ab6-93052f13e85b", 00:17:38.727 "base_bdev": "nvme0n1", 00:17:38.727 "thin_provision": true, 00:17:38.727 "num_allocated_clusters": 0, 00:17:38.727 "snapshot": false, 00:17:38.727 "clone": false, 00:17:38.727 "esnap_clone": false 00:17:38.727 } 00:17:38.727 } 00:17:38.727 } 00:17:38.727 ]' 00:17:38.727 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:38.985 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:38.986 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:38.986 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:38.986 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:38.986 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:38.986 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:38.986 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:38.986 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:39.243 01:03:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:39.243 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:39.243 { 00:17:39.243 "name": "d9544f81-79b7-4856-9a8d-228615a3ad53", 00:17:39.243 "aliases": [ 00:17:39.243 "lvs/nvme0n1p0" 00:17:39.243 ], 00:17:39.243 "product_name": "Logical Volume", 00:17:39.243 "block_size": 4096, 00:17:39.243 "num_blocks": 26476544, 00:17:39.243 "uuid": "d9544f81-79b7-4856-9a8d-228615a3ad53", 00:17:39.243 "assigned_rate_limits": { 00:17:39.243 "rw_ios_per_sec": 0, 00:17:39.243 "rw_mbytes_per_sec": 0, 00:17:39.243 "r_mbytes_per_sec": 0, 00:17:39.243 "w_mbytes_per_sec": 0 00:17:39.243 }, 00:17:39.243 "claimed": false, 00:17:39.243 "zoned": false, 00:17:39.243 "supported_io_types": { 00:17:39.243 "read": true, 00:17:39.243 "write": true, 00:17:39.243 "unmap": true, 00:17:39.243 "flush": false, 00:17:39.243 "reset": true, 00:17:39.243 "nvme_admin": false, 00:17:39.243 "nvme_io": false, 00:17:39.243 "nvme_io_md": false, 00:17:39.243 "write_zeroes": true, 00:17:39.243 "zcopy": false, 00:17:39.243 "get_zone_info": false, 00:17:39.243 "zone_management": false, 00:17:39.243 "zone_append": false, 00:17:39.243 "compare": false, 00:17:39.243 "compare_and_write": false, 00:17:39.243 "abort": false, 00:17:39.243 "seek_hole": true, 00:17:39.243 "seek_data": true, 00:17:39.243 "copy": false, 00:17:39.243 "nvme_iov_md": false 00:17:39.243 }, 00:17:39.243 "driver_specific": { 00:17:39.243 "lvol": { 00:17:39.243 "lvol_store_uuid": "a2ee783d-d641-4b62-9ab6-93052f13e85b", 00:17:39.243 "base_bdev": "nvme0n1", 00:17:39.243 "thin_provision": true, 00:17:39.243 "num_allocated_clusters": 0, 00:17:39.243 "snapshot": false, 00:17:39.243 "clone": false, 00:17:39.243 "esnap_clone": false 00:17:39.243 } 00:17:39.243 } 00:17:39.243 } 00:17:39.243 ]' 00:17:39.243 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:39.500 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:39.500 01:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:39.501 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:39.501 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:39.501 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:39.501 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:39.501 01:03:02 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d9544f81-79b7-4856-9a8d-228615a3ad53 00:17:39.758 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:39.758 { 00:17:39.758 "name": "d9544f81-79b7-4856-9a8d-228615a3ad53", 00:17:39.758 "aliases": [ 00:17:39.758 "lvs/nvme0n1p0" 00:17:39.758 ], 00:17:39.758 "product_name": "Logical Volume", 00:17:39.758 "block_size": 4096, 00:17:39.758 "num_blocks": 26476544, 00:17:39.758 "uuid": "d9544f81-79b7-4856-9a8d-228615a3ad53", 00:17:39.758 "assigned_rate_limits": { 00:17:39.758 "rw_ios_per_sec": 0, 00:17:39.758 "rw_mbytes_per_sec": 0, 00:17:39.758 "r_mbytes_per_sec": 0, 00:17:39.758 "w_mbytes_per_sec": 0 00:17:39.758 }, 00:17:39.758 "claimed": false, 00:17:39.758 "zoned": false, 00:17:39.758 "supported_io_types": { 00:17:39.758 "read": true, 00:17:39.758 "write": true, 00:17:39.758 "unmap": true, 00:17:39.758 "flush": false, 00:17:39.758 "reset": true, 00:17:39.758 "nvme_admin": false, 00:17:39.758 "nvme_io": false, 00:17:39.758 "nvme_io_md": false, 00:17:39.758 "write_zeroes": true, 00:17:39.758 "zcopy": false, 00:17:39.758 "get_zone_info": false, 00:17:39.758 "zone_management": false, 00:17:39.758 "zone_append": false, 00:17:39.758 "compare": false, 00:17:39.758 "compare_and_write": false, 00:17:39.758 "abort": false, 00:17:39.758 "seek_hole": true, 00:17:39.758 "seek_data": true, 00:17:39.758 "copy": false, 00:17:39.758 "nvme_iov_md": false 00:17:39.758 }, 00:17:39.758 "driver_specific": { 00:17:39.758 "lvol": { 00:17:39.758 "lvol_store_uuid": "a2ee783d-d641-4b62-9ab6-93052f13e85b", 00:17:39.758 "base_bdev": "nvme0n1", 00:17:39.758 "thin_provision": true, 00:17:39.758 "num_allocated_clusters": 0, 00:17:39.758 "snapshot": false, 00:17:39.758 "clone": false, 00:17:39.758 "esnap_clone": false 00:17:39.758 } 00:17:39.758 } 00:17:39.758 } 00:17:39.758 ]' 00:17:39.758 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:39.758 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:39.758 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:39.758 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:39.759 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:39.759 01:03:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:39.759 01:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:39.759 01:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:39.759 01:03:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d9544f81-79b7-4856-9a8d-228615a3ad53 -c nvc0n1p0 --l2p_dram_limit 60 00:17:40.017 [2024-11-26 01:03:02.831425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.831461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.017 [2024-11-26 01:03:02.831474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:40.017 [2024-11-26 01:03:02.831480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.831543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.831551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.017 [2024-11-26 01:03:02.831561] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:40.017 [2024-11-26 01:03:02.831567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.831597] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.017 [2024-11-26 01:03:02.831820] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.017 [2024-11-26 01:03:02.831853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.831860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.017 [2024-11-26 01:03:02.831868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:40.017 [2024-11-26 01:03:02.831874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.831935] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f113118b-023d-482a-aa5b-dbd9bac20938 00:17:40.017 [2024-11-26 01:03:02.832968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.832998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:40.017 [2024-11-26 01:03:02.833006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:40.017 [2024-11-26 01:03:02.833013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.838268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.838294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.017 [2024-11-26 01:03:02.838304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.171 ms 00:17:40.017 [2024-11-26 01:03:02.838313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.838387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.838395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.017 [2024-11-26 01:03:02.838402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:40.017 [2024-11-26 01:03:02.838417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.838468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.838477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.017 [2024-11-26 01:03:02.838483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:40.017 [2024-11-26 01:03:02.838492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.838524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.017 [2024-11-26 01:03:02.839838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.839870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.017 [2024-11-26 01:03:02.839879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:17:40.017 [2024-11-26 01:03:02.839886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.839926] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.839933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.017 [2024-11-26 01:03:02.839943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.017 [2024-11-26 01:03:02.839951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.839985] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:40.017 [2024-11-26 01:03:02.840106] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.017 [2024-11-26 01:03:02.840121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.017 [2024-11-26 01:03:02.840130] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.017 [2024-11-26 01:03:02.840143] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.017 [2024-11-26 01:03:02.840151] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.017 [2024-11-26 01:03:02.840159] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:40.017 [2024-11-26 01:03:02.840165] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.017 [2024-11-26 01:03:02.840174] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.017 [2024-11-26 01:03:02.840180] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.017 [2024-11-26 01:03:02.840187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.840193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.017 [2024-11-26 01:03:02.840200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:17:40.017 [2024-11-26 01:03:02.840205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.840281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.017 [2024-11-26 01:03:02.840288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.017 [2024-11-26 01:03:02.840295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:40.017 [2024-11-26 01:03:02.840300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.017 [2024-11-26 01:03:02.840390] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.018 [2024-11-26 01:03:02.840397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.018 [2024-11-26 01:03:02.840405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.018 [2024-11-26 01:03:02.840422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.018 
[2024-11-26 01:03:02.840440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.018 [2024-11-26 01:03:02.840452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.018 [2024-11-26 01:03:02.840457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:40.018 [2024-11-26 01:03:02.840466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.018 [2024-11-26 01:03:02.840471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.018 [2024-11-26 01:03:02.840479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:40.018 [2024-11-26 01:03:02.840484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.018 [2024-11-26 01:03:02.840508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.018 [2024-11-26 01:03:02.840527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.018 [2024-11-26 01:03:02.840544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.018 [2024-11-26 01:03:02.840561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.018 [2024-11-26 01:03:02.840579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.018 [2024-11-26 01:03:02.840596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.018 [2024-11-26 01:03:02.840607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.018 [2024-11-26 01:03:02.840612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:40.018 [2024-11-26 01:03:02.840618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.018 [2024-11-26 01:03:02.840622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.018 [2024-11-26 01:03:02.840629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:40.018 [2024-11-26 01:03:02.840634] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.018 [2024-11-26 01:03:02.840646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:40.018 [2024-11-26 01:03:02.840652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840657] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.018 [2024-11-26 01:03:02.840665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.018 [2024-11-26 01:03:02.840681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.018 [2024-11-26 01:03:02.840708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.018 [2024-11-26 01:03:02.840716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.018 [2024-11-26 01:03:02.840720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.018 [2024-11-26 01:03:02.840727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.018 [2024-11-26 01:03:02.840732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.018 [2024-11-26 01:03:02.840738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.018 [2024-11-26 01:03:02.840745] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.018 [2024-11-26 01:03:02.840754] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.018 [2024-11-26 01:03:02.840760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:40.018 [2024-11-26 01:03:02.840766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:40.018 [2024-11-26 01:03:02.840772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:40.018 [2024-11-26 01:03:02.840779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:40.018 [2024-11-26 01:03:02.840785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:40.018 [2024-11-26 01:03:02.840793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:40.018 [2024-11-26 01:03:02.840798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:40.018 [2024-11-26 01:03:02.840806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:40.018 [2024-11-26 01:03:02.840811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:40.018 [2024-11-26 01:03:02.840817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:17:40.018 [2024-11-26 01:03:02.840823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:40.018 [2024-11-26 01:03:02.840829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:40.018 [2024-11-26 01:03:02.840835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:40.018 [2024-11-26 01:03:02.840851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:40.018 [2024-11-26 01:03:02.840857] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.018 [2024-11-26 01:03:02.840864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.018 [2024-11-26 01:03:02.840869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.018 [2024-11-26 01:03:02.840876] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.018 [2024-11-26 01:03:02.840881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.018 [2024-11-26 01:03:02.840888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.018 [2024-11-26 01:03:02.840894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.018 [2024-11-26 01:03:02.840902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.018 [2024-11-26 01:03:02.840908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:17:40.018 [2024-11-26 01:03:02.840915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.018 [2024-11-26 01:03:02.840972] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
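The layout numbers in the dump above are internally consistent and worth cross-checking: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB logged for the l2p region, and the same entry count at one 4096-byte block per entry gives 80 GiB of user LBAs for ftl0, carved from the 103424.00 MiB base device with the 5171.00 MiB split serving as NV cache. A quick arithmetic sketch, plain shell with values taken straight from the dump, no SPDK API involved:

  # Reproduce the logged layout figures from first principles.
  l2p_entries=20971520    # "L2P entries" in the dump
  addr_size=4             # "L2P address size" in bytes
  block_size=4096         # bdev block size from bdev_get_bdevs
  echo "l2p region: $(( l2p_entries * addr_size / 1024 / 1024 )) MiB"          # 80
  echo "user space: $(( l2p_entries * block_size / 1024 / 1024 / 1024 )) GiB"  # 80

Since the full 80 MiB L2P exceeds the 60 MiB --l2p_dram_limit passed to bdev_ftl_create, only part of it can stay resident, which is what the later "l2p maximum resident size is: 59 (of 60) MiB" notice reports.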
00:17:40.018 [2024-11-26 01:03:02.840983] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:41.918 [2024-11-26 01:03:04.816264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.918 [2024-11-26 01:03:04.816319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:41.918 [2024-11-26 01:03:04.816334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1975.283 ms 00:17:41.918 [2024-11-26 01:03:04.816344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.918 [2024-11-26 01:03:04.824990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.918 [2024-11-26 01:03:04.825029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.918 [2024-11-26 01:03:04.825040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.560 ms 00:17:41.918 [2024-11-26 01:03:04.825051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.918 [2024-11-26 01:03:04.825150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.918 [2024-11-26 01:03:04.825171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:41.918 [2024-11-26 01:03:04.825180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:41.918 [2024-11-26 01:03:04.825189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.843505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.843545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.177 [2024-11-26 01:03:04.843557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.257 ms 00:17:42.177 [2024-11-26 01:03:04.843567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.843615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.843639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.177 [2024-11-26 01:03:04.843647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.177 [2024-11-26 01:03:04.843656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.844057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.844087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.177 [2024-11-26 01:03:04.844099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:17:42.177 [2024-11-26 01:03:04.844122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.844250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.844264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.177 [2024-11-26 01:03:04.844272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:42.177 [2024-11-26 01:03:04.844282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.850571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.850615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.177 [2024-11-26 
01:03:04.850632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.255 ms 00:17:42.177 [2024-11-26 01:03:04.850646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.861738] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:42.177 [2024-11-26 01:03:04.876343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.876369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.177 [2024-11-26 01:03:04.876381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.575 ms 00:17:42.177 [2024-11-26 01:03:04.876391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.910159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.910190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:42.177 [2024-11-26 01:03:04.910205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.727 ms 00:17:42.177 [2024-11-26 01:03:04.910213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.910407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.910418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:42.177 [2024-11-26 01:03:04.910438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:17:42.177 [2024-11-26 01:03:04.910446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.913125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.913152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:42.177 [2024-11-26 01:03:04.913163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:17:42.177 [2024-11-26 01:03:04.913171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.915419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.915446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:42.177 [2024-11-26 01:03:04.915457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:17:42.177 [2024-11-26 01:03:04.915464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.915760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.915773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:42.177 [2024-11-26 01:03:04.915785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:17:42.177 [2024-11-26 01:03:04.915792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.935401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.935429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:42.177 [2024-11-26 01:03:04.935451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.566 ms 00:17:42.177 [2024-11-26 01:03:04.935459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.939184] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.939212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:42.177 [2024-11-26 01:03:04.939223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.650 ms 00:17:42.177 [2024-11-26 01:03:04.939231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.941875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.941901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:42.177 [2024-11-26 01:03:04.941911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.580 ms 00:17:42.177 [2024-11-26 01:03:04.941918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.944729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.944755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:42.177 [2024-11-26 01:03:04.944768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.764 ms 00:17:42.177 [2024-11-26 01:03:04.944775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.944819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.944828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:42.177 [2024-11-26 01:03:04.944850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:42.177 [2024-11-26 01:03:04.944858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.944943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.177 [2024-11-26 01:03:04.944952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:42.177 [2024-11-26 01:03:04.944962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:42.177 [2024-11-26 01:03:04.944969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.177 [2024-11-26 01:03:04.945903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2114.035 ms, result 0 00:17:42.177 { 00:17:42.177 "name": "ftl0", 00:17:42.177 "uuid": "f113118b-023d-482a-aa5b-dbd9bac20938" 00:17:42.177 } 00:17:42.177 01:03:04 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:42.177 01:03:04 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:42.177 01:03:04 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:42.177 01:03:04 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:42.177 01:03:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:42.177 01:03:04 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:42.177 01:03:04 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:42.436 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:42.694 [ 00:17:42.694 { 00:17:42.694 "name": "ftl0", 00:17:42.694 "aliases": [ 00:17:42.694 "f113118b-023d-482a-aa5b-dbd9bac20938" 00:17:42.694 ], 00:17:42.694 "product_name": "FTL disk", 00:17:42.694 
"block_size": 4096, 00:17:42.694 "num_blocks": 20971520, 00:17:42.694 "uuid": "f113118b-023d-482a-aa5b-dbd9bac20938", 00:17:42.694 "assigned_rate_limits": { 00:17:42.694 "rw_ios_per_sec": 0, 00:17:42.694 "rw_mbytes_per_sec": 0, 00:17:42.694 "r_mbytes_per_sec": 0, 00:17:42.694 "w_mbytes_per_sec": 0 00:17:42.694 }, 00:17:42.694 "claimed": false, 00:17:42.694 "zoned": false, 00:17:42.694 "supported_io_types": { 00:17:42.694 "read": true, 00:17:42.694 "write": true, 00:17:42.694 "unmap": true, 00:17:42.694 "flush": true, 00:17:42.694 "reset": false, 00:17:42.694 "nvme_admin": false, 00:17:42.694 "nvme_io": false, 00:17:42.694 "nvme_io_md": false, 00:17:42.694 "write_zeroes": true, 00:17:42.694 "zcopy": false, 00:17:42.694 "get_zone_info": false, 00:17:42.694 "zone_management": false, 00:17:42.694 "zone_append": false, 00:17:42.694 "compare": false, 00:17:42.694 "compare_and_write": false, 00:17:42.694 "abort": false, 00:17:42.694 "seek_hole": false, 00:17:42.694 "seek_data": false, 00:17:42.694 "copy": false, 00:17:42.694 "nvme_iov_md": false 00:17:42.694 }, 00:17:42.694 "driver_specific": { 00:17:42.694 "ftl": { 00:17:42.694 "base_bdev": "d9544f81-79b7-4856-9a8d-228615a3ad53", 00:17:42.694 "cache": "nvc0n1p0" 00:17:42.694 } 00:17:42.694 } 00:17:42.694 } 00:17:42.694 ] 00:17:42.694 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:42.694 01:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:42.694 01:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:42.694 01:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:42.694 01:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:42.955 [2024-11-26 01:03:05.743215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.743253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:42.955 [2024-11-26 01:03:05.743265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.955 [2024-11-26 01:03:05.743276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.743310] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:42.955 [2024-11-26 01:03:05.743780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.743796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:42.955 [2024-11-26 01:03:05.743807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:17:42.955 [2024-11-26 01:03:05.743815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.744336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.744353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:42.955 [2024-11-26 01:03:05.744378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:17:42.955 [2024-11-26 01:03:05.744387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.747641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.747660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:42.955 [2024-11-26 
01:03:05.747671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:17:42.955 [2024-11-26 01:03:05.747682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.753798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.753820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:42.955 [2024-11-26 01:03:05.753833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.086 ms 00:17:42.955 [2024-11-26 01:03:05.753848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.755263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.755291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:42.955 [2024-11-26 01:03:05.755302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.315 ms 00:17:42.955 [2024-11-26 01:03:05.755309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.759093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.759126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:42.955 [2024-11-26 01:03:05.759137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.742 ms 00:17:42.955 [2024-11-26 01:03:05.759158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.759336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.759350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:42.955 [2024-11-26 01:03:05.759368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:17:42.955 [2024-11-26 01:03:05.759376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.760688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.760714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:42.955 [2024-11-26 01:03:05.760723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:17:42.955 [2024-11-26 01:03:05.760730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.761777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.761802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:42.955 [2024-11-26 01:03:05.761814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:17:42.955 [2024-11-26 01:03:05.761821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.762643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.762669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:42.955 [2024-11-26 01:03:05.762678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:17:42.955 [2024-11-26 01:03:05.762685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.763551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.955 [2024-11-26 01:03:05.763577] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:42.955 [2024-11-26 01:03:05.763587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:17:42.955 [2024-11-26 01:03:05.763594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.955 [2024-11-26 01:03:05.763635] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:42.955 [2024-11-26 01:03:05.763648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:42.955 [2024-11-26 01:03:05.763789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 
01:03:05.763851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.763999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:17:42.956 [2024-11-26 01:03:05.764063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:42.956 [2024-11-26 01:03:05.764522] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:42.956 [2024-11-26 01:03:05.764532] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f113118b-023d-482a-aa5b-dbd9bac20938 00:17:42.956 [2024-11-26 01:03:05.764540] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:42.956 [2024-11-26 01:03:05.764549] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:42.956 [2024-11-26 01:03:05.764556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:42.957 [2024-11-26 01:03:05.764564] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:42.957 [2024-11-26 01:03:05.764581] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:42.957 [2024-11-26 01:03:05.764590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:42.957 [2024-11-26 01:03:05.764597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:42.957 [2024-11-26 01:03:05.764604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:42.957 [2024-11-26 01:03:05.764611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:42.957 [2024-11-26 01:03:05.764619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.957 [2024-11-26 01:03:05.764627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:42.957 [2024-11-26 01:03:05.764636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:17:42.957 [2024-11-26 01:03:05.764645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.766210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.957 [2024-11-26 01:03:05.766226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:42.957 [2024-11-26 01:03:05.766238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:17:42.957 [2024-11-26 01:03:05.766245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.766340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.957 [2024-11-26 01:03:05.766359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:42.957 [2024-11-26 01:03:05.766372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:42.957 [2024-11-26 01:03:05.766380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.771792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.771822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.957 [2024-11-26 01:03:05.771833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.771852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 
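The statistics dump above reads as expected for a create-then-unload cycle: every band is still "0 / 261120 wr_cnt: 0 state: free", and the write-amplification figure is "inf" because the only writes counted so far are the 960 metadata writes. A small sketch of that ratio, assuming the usual WAF definition (total media writes divided by user writes), with values taken straight from the dump:

  total_writes=960     # "total writes" in the dump (metadata only)
  user_writes=0        # "user writes" in the dump
  if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"  # matches the log; no user data written yet
  else
      awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi

The Rollback entries around this point, beginning with "Initialize reloc" above, are the shutdown-side mirror of the startup actions, each logged with a 0.000 ms duration as its resources are released.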
[2024-11-26 01:03:05.771916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.771924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.957 [2024-11-26 01:03:05.771935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.771942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.772007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.772017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.957 [2024-11-26 01:03:05.772026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.772034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.772062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.772069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.957 [2024-11-26 01:03:05.772078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.772087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.781483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.781519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.957 [2024-11-26 01:03:05.781531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.781539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.789406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.789448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.957 [2024-11-26 01:03:05.789459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.789468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.789546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.789556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:42.957 [2024-11-26 01:03:05.789566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.789573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.789634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.789643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:42.957 [2024-11-26 01:03:05.789653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.789660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.789745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.789755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:42.957 [2024-11-26 01:03:05.789764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.789771] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.789820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.789830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:42.957 [2024-11-26 01:03:05.789906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.789915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.789958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.789966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:42.957 [2024-11-26 01:03:05.789975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.789982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.790044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:42.957 [2024-11-26 01:03:05.790054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:42.957 [2024-11-26 01:03:05.790072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:42.957 [2024-11-26 01:03:05.790086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.957 [2024-11-26 01:03:05.790254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.005 ms, result 0 00:17:42.957 true 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88035 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 88035 ']' 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 88035 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88035 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.957 killing process with pid 88035 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88035' 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 88035 00:17:42.957 01:03:05 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 88035 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:48.220 01:03:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:48.221 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:48.221 fio-3.35 00:17:48.221 Starting 1 thread 00:17:52.424 00:17:52.424 test: (groupid=0, jobs=1): err= 0: pid=88186: Tue Nov 26 01:03:14 2024 00:17:52.424 read: IOPS=1020, BW=67.8MiB/s (71.1MB/s)(255MiB/3755msec) 00:17:52.424 slat (nsec): min=3029, max=23145, avg=4951.47, stdev=2057.13 00:17:52.424 clat (usec): min=227, max=1321, avg=441.04, stdev=119.71 00:17:52.424 lat (usec): min=231, max=1332, avg=445.99, stdev=120.32 00:17:52.424 clat percentiles (usec): 00:17:52.424 | 1.00th=[ 285], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 318], 00:17:52.424 | 30.00th=[ 330], 40.00th=[ 408], 50.00th=[ 457], 60.00th=[ 482], 00:17:52.424 | 70.00th=[ 510], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 611], 00:17:52.424 | 99.00th=[ 848], 99.50th=[ 906], 99.90th=[ 1106], 99.95th=[ 1254], 00:17:52.424 | 99.99th=[ 1319] 00:17:52.424 write: IOPS=1027, BW=68.3MiB/s (71.6MB/s)(256MiB/3751msec); 0 zone resets 00:17:52.424 slat (nsec): min=13714, max=94095, avg=19543.51, stdev=4282.45 00:17:52.424 clat (usec): min=271, max=1368, avg=498.85, stdev=146.58 00:17:52.424 lat (usec): min=300, max=1386, avg=518.39, stdev=147.42 00:17:52.424 clat percentiles (usec): 00:17:52.424 | 1.00th=[ 302], 5.00th=[ 310], 10.00th=[ 334], 20.00th=[ 347], 00:17:52.424 | 30.00th=[ 367], 40.00th=[ 482], 50.00th=[ 515], 60.00th=[ 553], 00:17:52.424 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 635], 95.00th=[ 709], 00:17:52.424 | 99.00th=[ 1004], 99.50th=[ 1106], 99.90th=[ 1287], 99.95th=[ 1319], 00:17:52.424 | 99.99th=[ 1369] 00:17:52.424 bw ( KiB/s): min=59840, max=92616, per=100.00%, avg=70331.43, stdev=13657.55, samples=7 00:17:52.424 iops : min= 880, max= 1362, avg=1034.29, stdev=200.85, samples=7 00:17:52.424 lat (usec) : 250=0.04%, 500=56.87%, 
750=39.95%, 1000=2.50% 00:17:52.424 lat (msec) : 2=0.64% 00:17:52.424 cpu : usr=99.28%, sys=0.03%, ctx=9, majf=0, minf=1181 00:17:52.424 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.424 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.424 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.424 00:17:52.424 Run status group 0 (all jobs): 00:17:52.424 READ: bw=67.8MiB/s (71.1MB/s), 67.8MiB/s-67.8MiB/s (71.1MB/s-71.1MB/s), io=255MiB (267MB), run=3755-3755msec 00:17:52.424 WRITE: bw=68.3MiB/s (71.6MB/s), 68.3MiB/s-68.3MiB/s (71.6MB/s-71.6MB/s), io=256MiB (269MB), run=3751-3751msec 00:17:52.685 ----------------------------------------------------- 00:17:52.685 Suppressions used: 00:17:52.685 count bytes template 00:17:52.685 1 5 /usr/src/fio/parse.c 00:17:52.685 1 8 libtcmalloc_minimal.so 00:17:52.685 1 904 libcrypto.so 00:17:52.685 ----------------------------------------------------- 00:17:52.685 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:52.685 01:03:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:52.946 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:52.946 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:52.946 fio-3.35 00:17:52.946 Starting 2 threads 00:18:19.510 00:18:19.510 first_half: (groupid=0, jobs=1): err= 0: pid=88278: Tue Nov 26 01:03:41 2024 00:18:19.510 read: IOPS=2640, BW=10.3MiB/s (10.8MB/s)(255MiB/24710msec) 00:18:19.510 slat (usec): min=2, max=134, avg= 5.09, stdev= 1.71 00:18:19.510 clat (usec): min=564, max=423576, avg=39017.78, stdev=24220.83 00:18:19.510 lat (usec): min=569, max=423582, avg=39022.88, stdev=24220.89 00:18:19.510 clat percentiles (msec): 00:18:19.510 | 1.00th=[ 10], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:18:19.510 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 36], 00:18:19.510 | 70.00th=[ 38], 80.00th=[ 40], 90.00th=[ 46], 95.00th=[ 64], 00:18:19.510 | 99.00th=[ 163], 99.50th=[ 184], 99.90th=[ 275], 99.95th=[ 305], 00:18:19.510 | 99.99th=[ 393] 00:18:19.510 write: IOPS=3333, BW=13.0MiB/s (13.7MB/s)(256MiB/19661msec); 0 zone resets 00:18:19.510 slat (usec): min=3, max=2756, avg= 6.90, stdev=15.11 00:18:19.510 clat (usec): min=361, max=84545, avg=9396.38, stdev=14710.48 00:18:19.510 lat (usec): min=375, max=84551, avg=9403.28, stdev=14710.57 00:18:19.510 clat percentiles (usec): 00:18:19.510 | 1.00th=[ 701], 5.00th=[ 832], 10.00th=[ 988], 20.00th=[ 1270], 00:18:19.510 | 30.00th=[ 2278], 40.00th=[ 3490], 50.00th=[ 4817], 60.00th=[ 5800], 00:18:19.510 | 70.00th=[ 7111], 80.00th=[12780], 90.00th=[19006], 95.00th=[57934], 00:18:19.510 | 99.00th=[69731], 99.50th=[71828], 99.90th=[82314], 99.95th=[83362], 00:18:19.510 | 99.99th=[84411] 00:18:19.510 bw ( KiB/s): min= 152, max=51352, per=100.00%, avg=24966.10, stdev=16536.23, samples=21 00:18:19.510 iops : min= 38, max=12838, avg=6241.52, stdev=4134.06, samples=21 00:18:19.510 lat (usec) : 500=0.01%, 750=1.18%, 1000=4.04% 00:18:19.510 lat (msec) : 2=9.02%, 4=8.11%, 10=16.46%, 20=7.48%, 50=46.92% 00:18:19.510 lat (msec) : 100=5.31%, 250=1.35%, 500=0.12% 00:18:19.510 cpu : usr=98.85%, sys=0.29%, ctx=119, majf=0, minf=5587 00:18:19.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:19.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.511 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.511 issued rwts: total=65236,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.511 second_half: (groupid=0, jobs=1): err= 0: pid=88279: Tue Nov 26 01:03:41 2024 00:18:19.511 read: IOPS=2618, BW=10.2MiB/s (10.7MB/s)(255MiB/24968msec) 00:18:19.511 slat (nsec): min=2947, max=64643, avg=4314.97, stdev=1121.76 00:18:19.511 clat (usec): min=602, max=435854, avg=38034.18, stdev=24646.72 00:18:19.511 lat (usec): min=606, max=435860, avg=38038.50, stdev=24646.84 00:18:19.511 clat percentiles (msec): 00:18:19.511 | 1.00th=[ 13], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:18:19.511 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 36], 00:18:19.511 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 44], 
95.00th=[ 59], 00:18:19.511 | 99.00th=[ 163], 99.50th=[ 188], 99.90th=[ 292], 99.95th=[ 380], 00:18:19.511 | 99.99th=[ 435] 00:18:19.511 write: IOPS=2752, BW=10.8MiB/s (11.3MB/s)(256MiB/23806msec); 0 zone resets 00:18:19.511 slat (usec): min=3, max=1053, avg= 5.88, stdev= 7.48 00:18:19.511 clat (usec): min=336, max=85118, avg=10809.40, stdev=15277.24 00:18:19.511 lat (usec): min=341, max=85122, avg=10815.28, stdev=15277.40 00:18:19.511 clat percentiles (usec): 00:18:19.511 | 1.00th=[ 660], 5.00th=[ 791], 10.00th=[ 1045], 20.00th=[ 1565], 00:18:19.511 | 30.00th=[ 3195], 40.00th=[ 4686], 50.00th=[ 5866], 60.00th=[ 6849], 00:18:19.511 | 70.00th=[ 9503], 80.00th=[14615], 90.00th=[23462], 95.00th=[58983], 00:18:19.511 | 99.00th=[70779], 99.50th=[72877], 99.90th=[80217], 99.95th=[83362], 00:18:19.511 | 99.99th=[84411] 00:18:19.511 bw ( KiB/s): min= 928, max=50368, per=95.22%, avg=20971.52, stdev=12234.01, samples=25 00:18:19.511 iops : min= 232, max=12592, avg=5242.88, stdev=3058.50, samples=25 00:18:19.511 lat (usec) : 500=0.04%, 750=1.86%, 1000=2.68% 00:18:19.511 lat (msec) : 2=7.45%, 4=6.22%, 10=17.57%, 20=10.18%, 50=47.79% 00:18:19.511 lat (msec) : 100=4.96%, 250=1.14%, 500=0.11% 00:18:19.511 cpu : usr=99.32%, sys=0.12%, ctx=76, majf=0, minf=5551 00:18:19.511 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:19.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.511 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:19.511 issued rwts: total=65373,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:19.511 00:18:19.511 Run status group 0 (all jobs): 00:18:19.511 READ: bw=20.4MiB/s (21.4MB/s), 10.2MiB/s-10.3MiB/s (10.7MB/s-10.8MB/s), io=510MiB (535MB), run=24710-24968msec 00:18:19.511 WRITE: bw=21.5MiB/s (22.6MB/s), 10.8MiB/s-13.0MiB/s (11.3MB/s-13.7MB/s), io=512MiB (537MB), run=19661-23806msec 00:18:19.773 ----------------------------------------------------- 00:18:19.773 Suppressions used: 00:18:19.773 count bytes template 00:18:19.773 2 10 /usr/src/fio/parse.c 00:18:19.773 2 192 /usr/src/fio/iolog.c 00:18:19.773 1 8 libtcmalloc_minimal.so 00:18:19.773 1 904 libcrypto.so 00:18:19.773 ----------------------------------------------------- 00:18:19.773 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:19.773 01:03:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:20.034 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:20.034 fio-3.35 00:18:20.034 Starting 1 thread 00:18:38.138 00:18:38.138 test: (groupid=0, jobs=1): err= 0: pid=88591: Tue Nov 26 01:03:57 2024 00:18:38.138 read: IOPS=7906, BW=30.9MiB/s (32.4MB/s)(255MiB/8247msec) 00:18:38.138 slat (nsec): min=3041, max=27023, avg=4735.56, stdev=1075.69 00:18:38.138 clat (usec): min=517, max=31898, avg=16181.38, stdev=1839.38 00:18:38.138 lat (usec): min=522, max=31903, avg=16186.11, stdev=1839.40 00:18:38.138 clat percentiles (usec): 00:18:38.138 | 1.00th=[14091], 5.00th=[14353], 10.00th=[15139], 20.00th=[15270], 00:18:38.138 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15795], 60.00th=[16057], 00:18:38.138 | 70.00th=[16188], 80.00th=[16319], 90.00th=[17695], 95.00th=[20055], 00:18:38.138 | 99.00th=[24511], 99.50th=[25297], 99.90th=[30016], 99.95th=[31327], 00:18:38.138 | 99.99th=[31851] 00:18:38.138 write: IOPS=10.8k, BW=42.1MiB/s (44.2MB/s)(256MiB/6078msec); 0 zone resets 00:18:38.138 slat (usec): min=4, max=318, avg= 7.90, stdev= 4.12 00:18:38.138 clat (usec): min=489, max=64200, avg=11818.71, stdev=12302.86 00:18:38.138 lat (usec): min=494, max=64206, avg=11826.61, stdev=12303.08 00:18:38.138 clat percentiles (usec): 00:18:38.139 | 1.00th=[ 685], 5.00th=[ 873], 10.00th=[ 1020], 20.00th=[ 1221], 00:18:38.139 | 30.00th=[ 1418], 40.00th=[ 2409], 50.00th=[10028], 60.00th=[12256], 00:18:38.139 | 70.00th=[14877], 80.00th=[17433], 90.00th=[35390], 95.00th=[37487], 00:18:38.139 | 99.00th=[47973], 99.50th=[52167], 99.90th=[60031], 99.95th=[61080], 00:18:38.139 | 99.99th=[62653] 00:18:38.139 bw ( KiB/s): min= 5872, max=57328, per=93.49%, avg=40322.69, stdev=12492.97, samples=13 00:18:38.139 iops : min= 1468, max=14332, avg=10080.62, stdev=3123.21, samples=13 00:18:38.139 lat (usec) : 500=0.01%, 750=1.08%, 1000=3.57% 00:18:38.139 lat (msec) : 2=14.66%, 4=1.65%, 10=4.19%, 20=63.78%, 50=10.68% 00:18:38.139 lat (msec) : 100=0.40% 00:18:38.139 cpu : usr=98.90%, 
sys=0.27%, ctx=31, majf=0, minf=5577 00:18:38.139 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:38.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.139 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:38.139 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.139 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:38.139 00:18:38.139 Run status group 0 (all jobs): 00:18:38.139 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=255MiB (267MB), run=8247-8247msec 00:18:38.139 WRITE: bw=42.1MiB/s (44.2MB/s), 42.1MiB/s-42.1MiB/s (44.2MB/s-44.2MB/s), io=256MiB (268MB), run=6078-6078msec 00:18:38.139 ----------------------------------------------------- 00:18:38.139 Suppressions used: 00:18:38.139 count bytes template 00:18:38.139 1 5 /usr/src/fio/parse.c 00:18:38.139 2 192 /usr/src/fio/iolog.c 00:18:38.139 1 8 libtcmalloc_minimal.so 00:18:38.139 1 904 libcrypto.so 00:18:38.139 ----------------------------------------------------- 00:18:38.139 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:38.139 Remove shared memory files 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70995 /dev/shm/spdk_tgt_trace.pid86972 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:38.139 00:18:38.139 real 0m59.949s 00:18:38.139 user 2m14.641s 00:18:38.139 sys 0m2.857s 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:38.139 ************************************ 00:18:38.139 END TEST ftl_fio_basic 00:18:38.139 ************************************ 00:18:38.139 01:03:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:38.139 01:03:59 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:38.139 01:03:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:38.139 01:03:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:38.139 01:03:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:38.139 ************************************ 00:18:38.139 START TEST ftl_bdevperf 00:18:38.139 ************************************ 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:38.139 * Looking for test storage... 
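[annotation] Each fio_bdev run in the ftl_fio_basic trace above follows the same shape: fio_plugin() ldd's the SPDK fio plugin, greps out the libasan runtime it links against, and preloads both before handing fio the job file. A minimal sketch of that pattern, reconstructed from this xtrace (paths are the ones in this log; the real fio_plugin helper lives in common/autotest_common.sh):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

    # Find the ASan runtime the plugin was linked against (empty if none).
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Preload the sanitizer ahead of the plugin so its symbols resolve
    # before fio dlopen()s the spdk_bdev ioengine.
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio "$job"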
00:18:38.139 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:38.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.139 --rc genhtml_branch_coverage=1 00:18:38.139 --rc genhtml_function_coverage=1 00:18:38.139 --rc genhtml_legend=1 00:18:38.139 --rc geninfo_all_blocks=1 00:18:38.139 --rc geninfo_unexecuted_blocks=1 00:18:38.139 00:18:38.139 ' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:38.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.139 --rc genhtml_branch_coverage=1 00:18:38.139 
--rc genhtml_function_coverage=1 00:18:38.139 --rc genhtml_legend=1 00:18:38.139 --rc geninfo_all_blocks=1 00:18:38.139 --rc geninfo_unexecuted_blocks=1 00:18:38.139 00:18:38.139 ' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:38.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.139 --rc genhtml_branch_coverage=1 00:18:38.139 --rc genhtml_function_coverage=1 00:18:38.139 --rc genhtml_legend=1 00:18:38.139 --rc geninfo_all_blocks=1 00:18:38.139 --rc geninfo_unexecuted_blocks=1 00:18:38.139 00:18:38.139 ' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:38.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:38.139 --rc genhtml_branch_coverage=1 00:18:38.139 --rc genhtml_function_coverage=1 00:18:38.139 --rc genhtml_legend=1 00:18:38.139 --rc geninfo_all_blocks=1 00:18:38.139 --rc geninfo_unexecuted_blocks=1 00:18:38.139 00:18:38.139 ' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:38.139 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=88835 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 88835 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 88835 ']' 00:18:38.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.140 01:03:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:38.140 [2024-11-26 01:03:59.464631] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:18:38.140 [2024-11-26 01:03:59.464754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88835 ] 00:18:38.140 [2024-11-26 01:03:59.599329] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
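[annotation] The bdevperf launch just traced starts the app with -z (pause until RPC is up) and -T ftl0, captures its pid, installs a cleanup trap, and parks on waitforlisten until the UNIX socket /var/tmp/spdk.sock answers. A sketch of that sequence as the trace shows it (killprocess and waitforlisten are autotest_common.sh helpers, assumed sourced):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevperf_pid"   # blocks until /var/tmp/spdk.sock is listening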
00:18:38.140 [2024-11-26 01:03:59.629255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.140 [2024-11-26 01:03:59.656491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:38.140 { 00:18:38.140 "name": "nvme0n1", 00:18:38.140 "aliases": [ 00:18:38.140 "3d3618b2-dbc1-4da7-9237-8034cc827bc8" 00:18:38.140 ], 00:18:38.140 "product_name": "NVMe disk", 00:18:38.140 "block_size": 4096, 00:18:38.140 "num_blocks": 1310720, 00:18:38.140 "uuid": "3d3618b2-dbc1-4da7-9237-8034cc827bc8", 00:18:38.140 "numa_id": -1, 00:18:38.140 "assigned_rate_limits": { 00:18:38.140 "rw_ios_per_sec": 0, 00:18:38.140 "rw_mbytes_per_sec": 0, 00:18:38.140 "r_mbytes_per_sec": 0, 00:18:38.140 "w_mbytes_per_sec": 0 00:18:38.140 }, 00:18:38.140 "claimed": true, 00:18:38.140 "claim_type": "read_many_write_one", 00:18:38.140 "zoned": false, 00:18:38.140 "supported_io_types": { 00:18:38.140 "read": true, 00:18:38.140 "write": true, 00:18:38.140 "unmap": true, 00:18:38.140 "flush": true, 00:18:38.140 "reset": true, 00:18:38.140 "nvme_admin": true, 00:18:38.140 "nvme_io": true, 00:18:38.140 "nvme_io_md": false, 00:18:38.140 "write_zeroes": true, 00:18:38.140 "zcopy": false, 00:18:38.140 "get_zone_info": false, 00:18:38.140 "zone_management": false, 00:18:38.140 "zone_append": false, 00:18:38.140 "compare": true, 00:18:38.140 "compare_and_write": false, 00:18:38.140 "abort": true, 00:18:38.140 "seek_hole": false, 00:18:38.140 "seek_data": false, 00:18:38.140 "copy": true, 00:18:38.140 "nvme_iov_md": false 00:18:38.140 }, 00:18:38.140 "driver_specific": { 00:18:38.140 "nvme": [ 00:18:38.140 { 00:18:38.140 "pci_address": "0000:00:11.0", 00:18:38.140 "trid": { 00:18:38.140 "trtype": "PCIe", 00:18:38.140 "traddr": "0000:00:11.0" 00:18:38.140 }, 00:18:38.140 "ctrlr_data": { 00:18:38.140 "cntlid": 0, 00:18:38.140 "vendor_id": "0x1b36", 00:18:38.140 "model_number": "QEMU NVMe Ctrl", 
00:18:38.140 "serial_number": "12341", 00:18:38.140 "firmware_revision": "8.0.0", 00:18:38.140 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:38.140 "oacs": { 00:18:38.140 "security": 0, 00:18:38.140 "format": 1, 00:18:38.140 "firmware": 0, 00:18:38.140 "ns_manage": 1 00:18:38.140 }, 00:18:38.140 "multi_ctrlr": false, 00:18:38.140 "ana_reporting": false 00:18:38.140 }, 00:18:38.140 "vs": { 00:18:38.140 "nvme_version": "1.4" 00:18:38.140 }, 00:18:38.140 "ns_data": { 00:18:38.140 "id": 1, 00:18:38.140 "can_share": false 00:18:38.140 } 00:18:38.140 } 00:18:38.140 ], 00:18:38.140 "mp_policy": "active_passive" 00:18:38.140 } 00:18:38.140 } 00:18:38.140 ]' 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a2ee783d-d641-4b62-9ab6-93052f13e85b 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:38.140 01:04:00 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a2ee783d-d641-4b62-9ab6-93052f13e85b 00:18:38.397 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:38.397 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=d5e6b1e6-e607-492d-91c3-2384db859c29 00:18:38.397 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d5e6b1e6-e607-492d-91c3-2384db859c29 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:38.654 01:04:01 
ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:38.654 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:38.654 { 00:18:38.654 "name": "9210092a-637e-4e20-9ae7-f48e7a41638a", 00:18:38.654 "aliases": [ 00:18:38.654 "lvs/nvme0n1p0" 00:18:38.654 ], 00:18:38.654 "product_name": "Logical Volume", 00:18:38.654 "block_size": 4096, 00:18:38.654 "num_blocks": 26476544, 00:18:38.654 "uuid": "9210092a-637e-4e20-9ae7-f48e7a41638a", 00:18:38.654 "assigned_rate_limits": { 00:18:38.654 "rw_ios_per_sec": 0, 00:18:38.654 "rw_mbytes_per_sec": 0, 00:18:38.654 "r_mbytes_per_sec": 0, 00:18:38.654 "w_mbytes_per_sec": 0 00:18:38.654 }, 00:18:38.654 "claimed": false, 00:18:38.654 "zoned": false, 00:18:38.654 "supported_io_types": { 00:18:38.654 "read": true, 00:18:38.654 "write": true, 00:18:38.654 "unmap": true, 00:18:38.654 "flush": false, 00:18:38.654 "reset": true, 00:18:38.654 "nvme_admin": false, 00:18:38.654 "nvme_io": false, 00:18:38.654 "nvme_io_md": false, 00:18:38.654 "write_zeroes": true, 00:18:38.655 "zcopy": false, 00:18:38.655 "get_zone_info": false, 00:18:38.655 "zone_management": false, 00:18:38.655 "zone_append": false, 00:18:38.655 "compare": false, 00:18:38.655 "compare_and_write": false, 00:18:38.655 "abort": false, 00:18:38.655 "seek_hole": true, 00:18:38.655 "seek_data": true, 00:18:38.655 "copy": false, 00:18:38.655 "nvme_iov_md": false 00:18:38.655 }, 00:18:38.655 "driver_specific": { 00:18:38.655 "lvol": { 00:18:38.655 "lvol_store_uuid": "d5e6b1e6-e607-492d-91c3-2384db859c29", 00:18:38.655 "base_bdev": "nvme0n1", 00:18:38.655 "thin_provision": true, 00:18:38.655 "num_allocated_clusters": 0, 00:18:38.655 "snapshot": false, 00:18:38.655 "clone": false, 00:18:38.655 "esnap_clone": false 00:18:38.655 } 00:18:38.655 } 00:18:38.655 } 00:18:38.655 ]' 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:38.915 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1385 -- # local nb 00:18:39.176 01:04:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:39.176 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:39.176 { 00:18:39.176 "name": "9210092a-637e-4e20-9ae7-f48e7a41638a", 00:18:39.176 "aliases": [ 00:18:39.176 "lvs/nvme0n1p0" 00:18:39.176 ], 00:18:39.176 "product_name": "Logical Volume", 00:18:39.176 "block_size": 4096, 00:18:39.176 "num_blocks": 26476544, 00:18:39.176 "uuid": "9210092a-637e-4e20-9ae7-f48e7a41638a", 00:18:39.176 "assigned_rate_limits": { 00:18:39.176 "rw_ios_per_sec": 0, 00:18:39.176 "rw_mbytes_per_sec": 0, 00:18:39.176 "r_mbytes_per_sec": 0, 00:18:39.176 "w_mbytes_per_sec": 0 00:18:39.176 }, 00:18:39.176 "claimed": false, 00:18:39.176 "zoned": false, 00:18:39.176 "supported_io_types": { 00:18:39.176 "read": true, 00:18:39.176 "write": true, 00:18:39.176 "unmap": true, 00:18:39.176 "flush": false, 00:18:39.176 "reset": true, 00:18:39.176 "nvme_admin": false, 00:18:39.176 "nvme_io": false, 00:18:39.176 "nvme_io_md": false, 00:18:39.176 "write_zeroes": true, 00:18:39.176 "zcopy": false, 00:18:39.176 "get_zone_info": false, 00:18:39.176 "zone_management": false, 00:18:39.176 "zone_append": false, 00:18:39.176 "compare": false, 00:18:39.176 "compare_and_write": false, 00:18:39.176 "abort": false, 00:18:39.176 "seek_hole": true, 00:18:39.176 "seek_data": true, 00:18:39.176 "copy": false, 00:18:39.176 "nvme_iov_md": false 00:18:39.176 }, 00:18:39.176 "driver_specific": { 00:18:39.176 "lvol": { 00:18:39.176 "lvol_store_uuid": "d5e6b1e6-e607-492d-91c3-2384db859c29", 00:18:39.176 "base_bdev": "nvme0n1", 00:18:39.176 "thin_provision": true, 00:18:39.176 "num_allocated_clusters": 0, 00:18:39.176 "snapshot": false, 00:18:39.176 "clone": false, 00:18:39.176 "esnap_clone": false 00:18:39.176 } 00:18:39.176 } 00:18:39.176 } 00:18:39.176 ]' 00:18:39.176 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=9210092a-637e-4e20-9ae7-f48e7a41638a 00:18:39.437 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9210092a-637e-4e20-9ae7-f48e7a41638a 
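[annotation] get_bdev_size is invoked repeatedly here: it pulls block_size and num_blocks out of bdev_get_bdevs with jq and reports the size in MiB. A sketch, under the assumption that the helper computes bs * nb / 1024 / 1024 — which matches the numbers in this trace (4096 B x 26476544 blocks = 103424 MiB for the lvol; 4096 B x 1310720 blocks = 5120 MiB for nvme0n1):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_info=$("$rpc" bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<<"$bdev_info")    # 4096 in this log
    nb=$(jq '.[] .num_blocks' <<<"$bdev_info")    # 26476544 for the lvol
    echo $(( bs * nb / 1024 / 1024 ))             # -> 103424 (MiB)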
00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:39.698 { 00:18:39.698 "name": "9210092a-637e-4e20-9ae7-f48e7a41638a", 00:18:39.698 "aliases": [ 00:18:39.698 "lvs/nvme0n1p0" 00:18:39.698 ], 00:18:39.698 "product_name": "Logical Volume", 00:18:39.698 "block_size": 4096, 00:18:39.698 "num_blocks": 26476544, 00:18:39.698 "uuid": "9210092a-637e-4e20-9ae7-f48e7a41638a", 00:18:39.698 "assigned_rate_limits": { 00:18:39.698 "rw_ios_per_sec": 0, 00:18:39.698 "rw_mbytes_per_sec": 0, 00:18:39.698 "r_mbytes_per_sec": 0, 00:18:39.698 "w_mbytes_per_sec": 0 00:18:39.698 }, 00:18:39.698 "claimed": false, 00:18:39.698 "zoned": false, 00:18:39.698 "supported_io_types": { 00:18:39.698 "read": true, 00:18:39.698 "write": true, 00:18:39.698 "unmap": true, 00:18:39.698 "flush": false, 00:18:39.698 "reset": true, 00:18:39.698 "nvme_admin": false, 00:18:39.698 "nvme_io": false, 00:18:39.698 "nvme_io_md": false, 00:18:39.698 "write_zeroes": true, 00:18:39.698 "zcopy": false, 00:18:39.698 "get_zone_info": false, 00:18:39.698 "zone_management": false, 00:18:39.698 "zone_append": false, 00:18:39.698 "compare": false, 00:18:39.698 "compare_and_write": false, 00:18:39.698 "abort": false, 00:18:39.698 "seek_hole": true, 00:18:39.698 "seek_data": true, 00:18:39.698 "copy": false, 00:18:39.698 "nvme_iov_md": false 00:18:39.698 }, 00:18:39.698 "driver_specific": { 00:18:39.698 "lvol": { 00:18:39.698 "lvol_store_uuid": "d5e6b1e6-e607-492d-91c3-2384db859c29", 00:18:39.698 "base_bdev": "nvme0n1", 00:18:39.698 "thin_provision": true, 00:18:39.698 "num_allocated_clusters": 0, 00:18:39.698 "snapshot": false, 00:18:39.698 "clone": false, 00:18:39.698 "esnap_clone": false 00:18:39.698 } 00:18:39.698 } 00:18:39.698 } 00:18:39.698 ]' 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:39.698 01:04:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:39.959 01:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:39.959 01:04:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9210092a-637e-4e20-9ae7-f48e7a41638a -c nvc0n1p0 --l2p_dram_limit 20 00:18:39.959 [2024-11-26 01:04:02.801530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.959 [2024-11-26 01:04:02.801575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:39.959 [2024-11-26 01:04:02.801587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:39.959 [2024-11-26 01:04:02.801595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.959 [2024-11-26 01:04:02.801631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.959 [2024-11-26 01:04:02.801642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.959 [2024-11-26 01:04:02.801649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:39.959 [2024-11-26 01:04:02.801658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.959 [2024-11-26 
01:04:02.801671] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:39.959 [2024-11-26 01:04:02.801859] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:39.959 [2024-11-26 01:04:02.801872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.959 [2024-11-26 01:04:02.801883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.959 [2024-11-26 01:04:02.801890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:18:39.959 [2024-11-26 01:04:02.801898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.959 [2024-11-26 01:04:02.801920] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a2d792b7-05aa-44fd-a1bf-fe25489dce4f 00:18:39.959 [2024-11-26 01:04:02.803208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.959 [2024-11-26 01:04:02.803236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:39.959 [2024-11-26 01:04:02.803249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:39.959 [2024-11-26 01:04:02.803257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.959 [2024-11-26 01:04:02.810050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.959 [2024-11-26 01:04:02.810080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.959 [2024-11-26 01:04:02.810093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.758 ms 00:18:39.960 [2024-11-26 01:04:02.810099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.960 [2024-11-26 01:04:02.810200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.960 [2024-11-26 01:04:02.810209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.960 [2024-11-26 01:04:02.810224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:39.960 [2024-11-26 01:04:02.810230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.960 [2024-11-26 01:04:02.810267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.960 [2024-11-26 01:04:02.810275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:39.960 [2024-11-26 01:04:02.810283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:39.960 [2024-11-26 01:04:02.810289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.960 [2024-11-26 01:04:02.810307] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:39.960 [2024-11-26 01:04:02.811933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.960 [2024-11-26 01:04:02.811964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.960 [2024-11-26 01:04:02.811972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:18:39.960 [2024-11-26 01:04:02.811981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.960 [2024-11-26 01:04:02.812007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.960 [2024-11-26 01:04:02.812018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:39.960 [2024-11-26 
01:04:02.812025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:39.960 [2024-11-26 01:04:02.812033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.960 [2024-11-26 01:04:02.812047] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:39.960 [2024-11-26 01:04:02.812163] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:39.960 [2024-11-26 01:04:02.812174] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:39.960 [2024-11-26 01:04:02.812184] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:39.960 [2024-11-26 01:04:02.812192] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812202] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812211] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:39.960 [2024-11-26 01:04:02.812220] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:39.960 [2024-11-26 01:04:02.812226] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:39.960 [2024-11-26 01:04:02.812235] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:39.960 [2024-11-26 01:04:02.812240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.960 [2024-11-26 01:04:02.812247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:39.960 [2024-11-26 01:04:02.812253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:18:39.960 [2024-11-26 01:04:02.812262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.960 [2024-11-26 01:04:02.812322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.960 [2024-11-26 01:04:02.812330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:39.960 [2024-11-26 01:04:02.812336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:39.960 [2024-11-26 01:04:02.812342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.960 [2024-11-26 01:04:02.812413] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:39.960 [2024-11-26 01:04:02.812423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:39.960 [2024-11-26 01:04:02.812429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:39.960 [2024-11-26 01:04:02.812457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:39.960 [2024-11-26 01:04:02.812475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:18:39.960 [2024-11-26 01:04:02.812488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:39.960 [2024-11-26 01:04:02.812496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:39.960 [2024-11-26 01:04:02.812501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:39.960 [2024-11-26 01:04:02.812508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:39.960 [2024-11-26 01:04:02.812513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:39.960 [2024-11-26 01:04:02.812520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:39.960 [2024-11-26 01:04:02.812531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:39.960 [2024-11-26 01:04:02.812548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:39.960 [2024-11-26 01:04:02.812565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:39.960 [2024-11-26 01:04:02.812581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:39.960 [2024-11-26 01:04:02.812601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:39.960 [2024-11-26 01:04:02.812617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.960 [2024-11-26 01:04:02.812629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:39.960 [2024-11-26 01:04:02.812635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:39.960 [2024-11-26 01:04:02.812640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.960 [2024-11-26 01:04:02.812647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:39.960 [2024-11-26 01:04:02.812652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:39.960 [2024-11-26 01:04:02.812658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:39.960 [2024-11-26 01:04:02.812669] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:39.960 [2024-11-26 01:04:02.812675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812684] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:39.960 [2024-11-26 01:04:02.812690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:39.960 [2024-11-26 01:04:02.812704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.960 [2024-11-26 01:04:02.812717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:39.960 [2024-11-26 01:04:02.812722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:39.960 [2024-11-26 01:04:02.812729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:39.960 [2024-11-26 01:04:02.812734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:39.960 [2024-11-26 01:04:02.812740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:39.960 [2024-11-26 01:04:02.812745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:39.960 [2024-11-26 01:04:02.812756] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:39.960 [2024-11-26 01:04:02.812765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.960 [2024-11-26 01:04:02.812775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:39.960 [2024-11-26 01:04:02.812781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:39.960 [2024-11-26 01:04:02.812788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:39.960 [2024-11-26 01:04:02.812793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:39.960 [2024-11-26 01:04:02.812802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:39.960 [2024-11-26 01:04:02.812807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:39.960 [2024-11-26 01:04:02.812814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:39.960 [2024-11-26 01:04:02.812820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:39.960 [2024-11-26 01:04:02.812827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:39.960 [2024-11-26 01:04:02.812832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:39.961 [2024-11-26 01:04:02.812849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:39.961 [2024-11-26 
01:04:02.812856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:39.961 [2024-11-26 01:04:02.812863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:39.961 [2024-11-26 01:04:02.812868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:39.961 [2024-11-26 01:04:02.812875] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:39.961 [2024-11-26 01:04:02.812882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.961 [2024-11-26 01:04:02.812890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:39.961 [2024-11-26 01:04:02.812895] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:39.961 [2024-11-26 01:04:02.812902] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:39.961 [2024-11-26 01:04:02.812910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:39.961 [2024-11-26 01:04:02.812920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.961 [2024-11-26 01:04:02.812925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:39.961 [2024-11-26 01:04:02.812935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:39.961 [2024-11-26 01:04:02.812941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.961 [2024-11-26 01:04:02.812966] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:39.961 [2024-11-26 01:04:02.812974] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:44.169 [2024-11-26 01:04:06.559096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.559163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:44.169 [2024-11-26 01:04:06.559184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3746.111 ms 00:18:44.169 [2024-11-26 01:04:06.559194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.570549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.570596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.169 [2024-11-26 01:04:06.570615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.263 ms 00:18:44.169 [2024-11-26 01:04:06.570625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.570720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.570733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:44.169 [2024-11-26 01:04:06.570745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:44.169 [2024-11-26 01:04:06.570755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.594602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.594659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.169 [2024-11-26 01:04:06.594682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.797 ms 00:18:44.169 [2024-11-26 01:04:06.594695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.594747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.594762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.169 [2024-11-26 01:04:06.594778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:44.169 [2024-11-26 01:04:06.594790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.595329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.595365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.169 [2024-11-26 01:04:06.595392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:18:44.169 [2024-11-26 01:04:06.595406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.595570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.595585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.169 [2024-11-26 01:04:06.595604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:18:44.169 [2024-11-26 01:04:06.595616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.603426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.603469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.169 [2024-11-26 
01:04:06.603486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.778 ms 00:18:44.169 [2024-11-26 01:04:06.603498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.613204] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:44.169 [2024-11-26 01:04:06.619594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.619628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.169 [2024-11-26 01:04:06.619637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.023 ms 00:18:44.169 [2024-11-26 01:04:06.619647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.691187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.691235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:44.169 [2024-11-26 01:04:06.691247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.517 ms 00:18:44.169 [2024-11-26 01:04:06.691260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.691451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.691465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.169 [2024-11-26 01:04:06.691475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:18:44.169 [2024-11-26 01:04:06.691484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.169 [2024-11-26 01:04:06.695885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.169 [2024-11-26 01:04:06.695925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:44.170 [2024-11-26 01:04:06.695934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:18:44.170 [2024-11-26 01:04:06.695944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.699813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.699873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:44.170 [2024-11-26 01:04:06.699884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:18:44.170 [2024-11-26 01:04:06.699893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.700195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.700212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:44.170 [2024-11-26 01:04:06.700222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:18:44.170 [2024-11-26 01:04:06.700232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.737778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.737824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:44.170 [2024-11-26 01:04:06.737836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.510 ms 00:18:44.170 [2024-11-26 01:04:06.737862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.743960] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.744003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:44.170 [2024-11-26 01:04:06.744014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.044 ms 00:18:44.170 [2024-11-26 01:04:06.744024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.748472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.748512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:44.170 [2024-11-26 01:04:06.748522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.413 ms 00:18:44.170 [2024-11-26 01:04:06.748533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.753633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.753678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.170 [2024-11-26 01:04:06.753688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 00:18:44.170 [2024-11-26 01:04:06.753698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.753741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.753754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.170 [2024-11-26 01:04:06.753763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:44.170 [2024-11-26 01:04:06.753773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.753866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.170 [2024-11-26 01:04:06.753880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.170 [2024-11-26 01:04:06.753890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:44.170 [2024-11-26 01:04:06.753900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.170 [2024-11-26 01:04:06.755020] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3952.985 ms, result 0 00:18:44.170 { 00:18:44.170 "name": "ftl0", 00:18:44.170 "uuid": "a2d792b7-05aa-44fd-a1bf-fe25489dce4f" 00:18:44.170 } 00:18:44.170 01:04:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:44.170 01:04:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:44.170 01:04:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:44.170 01:04:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:44.432 [2024-11-26 01:04:07.089265] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:44.432 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:44.432 Zero copy mechanism will not be used. 00:18:44.432 Running I/O for 4 seconds... 
00:18:46.324 679.00 IOPS, 45.09 MiB/s
[2024-11-26T01:04:10.183Z] 712.00 IOPS, 47.28 MiB/s
[2024-11-26T01:04:11.127Z] 728.33 IOPS, 48.37 MiB/s
[2024-11-26T01:04:11.127Z] 741.00 IOPS, 49.21 MiB/s
00:18:48.210 Latency(us)
00:18:48.210 [2024-11-26T01:04:11.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:48.210 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:18:48.210 ftl0 : 4.00 741.01 49.21 0.00 0.00 1441.08 519.88 2646.65
00:18:48.210 [2024-11-26T01:04:11.127Z] ===================================================================================================================
00:18:48.210 [2024-11-26T01:04:11.127Z] Total : 741.01 49.21 0.00 0.00 1441.08 519.88 2646.65
00:18:48.210 [2024-11-26 01:04:11.096528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:48.210 {
00:18:48.210 "results": [
00:18:48.210 {
00:18:48.210 "job": "ftl0",
00:18:48.210 "core_mask": "0x1",
00:18:48.210 "workload": "randwrite",
00:18:48.210 "status": "finished",
00:18:48.210 "queue_depth": 1,
00:18:48.210 "io_size": 69632,
00:18:48.210 "runtime": 4.001317,
00:18:48.210 "iops": 741.0060237666748,
00:18:48.210 "mibps": 49.20743126575575,
00:18:48.210 "io_failed": 0,
00:18:48.210 "io_timeout": 0,
00:18:48.210 "avg_latency_us": 1441.0810844467505,
00:18:48.210 "min_latency_us": 519.876923076923,
00:18:48.210 "max_latency_us": 2646.646153846154
00:18:48.210 }
00:18:48.210 ],
00:18:48.210 "core_count": 1
00:18:48.210 }
00:18:48.210 01:04:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:18:48.471 [2024-11-26 01:04:11.203256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:18:48.472 Running I/O for 4 seconds...
00:18:50.425 7576.00 IOPS, 29.59 MiB/s
[2024-11-26T01:04:14.288Z] 6131.50 IOPS, 23.95 MiB/s
[2024-11-26T01:04:15.233Z] 6125.67 IOPS, 23.93 MiB/s
[2024-11-26T01:04:15.495Z] 5928.50 IOPS, 23.16 MiB/s
00:18:52.578 Latency(us)
00:18:52.578 [2024-11-26T01:04:15.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:52.578 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:18:52.578 ftl0 : 4.03 5911.47 23.09 0.00 0.00 21569.62 318.23 47387.57
00:18:52.578 [2024-11-26T01:04:15.495Z] ===================================================================================================================
00:18:52.578 [2024-11-26T01:04:15.495Z] Total : 5911.47 23.09 0.00 0.00 21569.62 0.00 47387.57
00:18:52.578 [2024-11-26 01:04:15.241255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:52.578 {
00:18:52.578 "results": [
00:18:52.578 {
00:18:52.578 "job": "ftl0",
00:18:52.578 "core_mask": "0x1",
00:18:52.578 "workload": "randwrite",
00:18:52.578 "status": "finished",
00:18:52.578 "queue_depth": 128,
00:18:52.578 "io_size": 4096,
00:18:52.578 "runtime": 4.031825,
00:18:52.578 "iops": 5911.466891544152,
00:18:52.578 "mibps": 23.091667545094342,
00:18:52.578 "io_failed": 0,
00:18:52.578 "io_timeout": 0,
00:18:52.578 "avg_latency_us": 21569.62437990976,
00:18:52.578 "min_latency_us": 318.2276923076923,
00:18:52.578 "max_latency_us": 47387.56923076923
00:18:52.578 }
00:18:52.578 ],
00:18:52.578 "core_count": 1
00:18:52.578 }
00:18:52.578 01:04:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:18:52.578 [2024-11-26 01:04:15.358293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:18:54.464 4524.00 IOPS, 17.67 MiB/s
[2024-11-26T01:04:18.769Z] 4554.50 IOPS, 17.79 MiB/s
[2024-11-26T01:04:19.711Z] 4912.67 IOPS, 19.19 MiB/s
[2024-11-26T01:04:19.711Z] 5028.75 IOPS, 19.64 MiB/s
00:18:56.794 Latency(us)
00:18:56.794 [2024-11-26T01:04:19.711Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:56.794 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:56.794 Verification LBA range: start 0x0 length 0x1400000
00:18:56.794 ftl0 : 4.02 5038.27 19.68 0.00 0.00 25320.44 313.50 39321.60
00:18:56.794 [2024-11-26T01:04:19.711Z] ===================================================================================================================
00:18:56.794 [2024-11-26T01:04:19.711Z] Total : 5038.27 19.68 0.00 0.00 25320.44 0.00 39321.60
00:18:56.794 [2024-11-26 01:04:19.383801] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:18:56.794 {
00:18:56.794 "results": [
00:18:56.794 {
00:18:56.794 "job": "ftl0",
00:18:56.794 "core_mask": "0x1",
00:18:56.794 "workload": "verify",
00:18:56.794 "status": "finished",
00:18:56.794 "verify_range": {
00:18:56.794 "start": 0,
00:18:56.794 "length": 20971520
00:18:56.794 },
00:18:56.794 "queue_depth": 128,
00:18:56.794 "io_size": 4096,
00:18:56.794 "runtime": 4.015863,
00:18:56.794 "iops": 5038.2694827985915,
00:18:56.794 "mibps": 19.680740167181998,
00:18:56.794 "io_failed": 0,
00:18:56.794 "io_timeout": 0,
00:18:56.794 "avg_latency_us": 25320.44425093811,
00:18:56.794 "min_latency_us": 313.5015384615385,
00:18:56.794 "max_latency_us": 39321.6
00:18:56.794 }
00:18:56.794 ],
00:18:56.794 "core_count": 1
00:18:56.794 }
00:18:56.794 01:04:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:18:56.794 [2024-11-26 01:04:19.600198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:56.794 [2024-11-26 01:04:19.600260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:56.794 [2024-11-26 01:04:19.600274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:56.794 [2024-11-26 01:04:19.600287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:56.794 [2024-11-26 01:04:19.600313] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:56.794 [2024-11-26 01:04:19.601299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:56.794 [2024-11-26 01:04:19.601339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:56.794 [2024-11-26 01:04:19.601365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms
00:18:56.794 [2024-11-26 01:04:19.601375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:56.794 [2024-11-26 01:04:19.604653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:56.794 [2024-11-26 01:04:19.604703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:56.794 [2024-11-26 01:04:19.604721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms
00:18:56.794 [2024-11-26 01:04:19.604731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.057 [2024-11-26 01:04:19.823818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:57.057 [2024-11-26 01:04:19.823882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:57.057 [2024-11-26 01:04:19.823904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 219.060 ms 00:18:57.057 [2024-11-26 01:04:19.823913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.830224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.830267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.057 [2024-11-26 01:04:19.830283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.272 ms 00:18:57.057 [2024-11-26 01:04:19.830294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.833168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.833221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.057 [2024-11-26 01:04:19.833237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:18:57.057 [2024-11-26 01:04:19.833246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.840817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.840899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.057 [2024-11-26 01:04:19.840921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.517 ms 00:18:57.057 [2024-11-26 01:04:19.840932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.841074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.841090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.057 [2024-11-26 01:04:19.841109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:57.057 [2024-11-26 01:04:19.841119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.844954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.845009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:57.057 [2024-11-26 01:04:19.845024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.807 ms 00:18:57.057 [2024-11-26 01:04:19.845033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.847888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.847934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:57.057 [2024-11-26 01:04:19.847950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.801 ms 00:18:57.057 [2024-11-26 01:04:19.847959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.850168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.850376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:57.057 [2024-11-26 01:04:19.850441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:18:57.057 [2024-11-26 01:04:19.850451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.852700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.057 [2024-11-26 01:04:19.852751] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:57.057 [2024-11-26 01:04:19.852764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.173 ms 00:18:57.057 [2024-11-26 01:04:19.852772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.057 [2024-11-26 01:04:19.852815] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:57.057 [2024-11-26 01:04:19.852837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.852993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:57.057 [2024-11-26 01:04:19.853073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:57.057 [2024-11-26 01:04:19.853256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853800] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:57.058 [2024-11-26 01:04:19.853861] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:57.058 [2024-11-26 01:04:19.853879] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a2d792b7-05aa-44fd-a1bf-fe25489dce4f 00:18:57.058 [2024-11-26 01:04:19.853889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:57.058 [2024-11-26 01:04:19.853900] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:57.058 [2024-11-26 01:04:19.853909] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:57.058 [2024-11-26 01:04:19.853926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:57.058 [2024-11-26 01:04:19.853934] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:57.058 [2024-11-26 01:04:19.853945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:57.058 [2024-11-26 01:04:19.853958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:57.058 [2024-11-26 01:04:19.853968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:57.058 [2024-11-26 01:04:19.853982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:57.058 [2024-11-26 01:04:19.853992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.058 [2024-11-26 01:04:19.854001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:57.058 [2024-11-26 01:04:19.854015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.180 ms 00:18:57.058 [2024-11-26 01:04:19.854025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.058 [2024-11-26 01:04:19.857139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.058 [2024-11-26 01:04:19.857335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:57.058 [2024-11-26 01:04:19.857362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.091 ms 00:18:57.058 [2024-11-26 01:04:19.857372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.058 [2024-11-26 01:04:19.857526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.058 [2024-11-26 01:04:19.857540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:57.058 [2024-11-26 01:04:19.857559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:18:57.058 [2024-11-26 01:04:19.857566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.058 [2024-11-26 01:04:19.868422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.058 [2024-11-26 01:04:19.868608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.058 [2024-11-26 01:04:19.868640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.058 [2024-11-26 01:04:19.868650] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:57.058 [2024-11-26 01:04:19.868720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.058 [2024-11-26 01:04:19.868736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.058 [2024-11-26 01:04:19.868747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.058 [2024-11-26 01:04:19.868756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.058 [2024-11-26 01:04:19.868881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.058 [2024-11-26 01:04:19.868894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.058 [2024-11-26 01:04:19.868906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.058 [2024-11-26 01:04:19.868915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.058 [2024-11-26 01:04:19.868936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.058 [2024-11-26 01:04:19.868947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.058 [2024-11-26 01:04:19.868964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.059 [2024-11-26 01:04:19.868972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.059 [2024-11-26 01:04:19.888816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.059 [2024-11-26 01:04:19.888912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.059 [2024-11-26 01:04:19.888928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.059 [2024-11-26 01:04:19.888937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.059 [2024-11-26 01:04:19.904816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.059 [2024-11-26 01:04:19.904912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.059 [2024-11-26 01:04:19.904932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.059 [2024-11-26 01:04:19.904941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.059 [2024-11-26 01:04:19.905075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.059 [2024-11-26 01:04:19.905089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.059 [2024-11-26 01:04:19.905101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.059 [2024-11-26 01:04:19.905110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.059 [2024-11-26 01:04:19.905164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.059 [2024-11-26 01:04:19.905176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.059 [2024-11-26 01:04:19.905191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.059 [2024-11-26 01:04:19.905202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.059 [2024-11-26 01:04:19.905289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.059 [2024-11-26 01:04:19.905301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.059 [2024-11-26 01:04:19.905312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:18:57.059 [2024-11-26 01:04:19.905320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.059 [2024-11-26 01:04:19.905360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.059 [2024-11-26 01:04:19.905371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:57.059 [2024-11-26 01:04:19.905383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.059 [2024-11-26 01:04:19.905391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.059 [2024-11-26 01:04:19.905446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.059 [2024-11-26 01:04:19.905457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:57.059 [2024-11-26 01:04:19.905469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.059 [2024-11-26 01:04:19.905480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.059 [2024-11-26 01:04:19.905541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:57.059 [2024-11-26 01:04:19.905553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:57.059 [2024-11-26 01:04:19.905567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:57.059 [2024-11-26 01:04:19.905579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:57.059 [2024-11-26 01:04:19.905769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 305.501 ms, result 0
00:18:57.059 true
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 88835
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 88835 ']'
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 88835
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88835
00:18:57.059 killing process with pid 88835
Received shutdown signal, test time was about 4.000000 seconds
00:18:57.059
00:18:57.059 Latency(us)
00:18:57.059 [2024-11-26T01:04:19.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:57.059 [2024-11-26T01:04:19.976Z] ===================================================================================================================
00:18:57.059 [2024-11-26T01:04:19.976Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88835'
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 88835
00:18:57.059 01:04:19 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 88835
00:18:59.608 Remove shared memory files
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:59.608 01:04:22
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:18:59.608 ************************************
00:18:59.608 END TEST ftl_bdevperf
00:18:59.608 ************************************
00:18:59.608
00:18:59.608 real 0m23.266s
00:18:59.608 user 0m25.581s
00:18:59.608 sys 0m0.884s
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:59.608 01:04:22 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:18:59.870 01:04:22 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:18:59.870 01:04:22 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:18:59.870 01:04:22 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:59.870 01:04:22 ftl -- common/autotest_common.sh@10 -- # set +x
00:18:59.870 ************************************
00:18:59.870 START TEST ftl_trim
00:18:59.870 ************************************
00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:18:59.870 * Looking for test storage...
00:18:59.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.870 01:04:22 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:59.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.870 --rc genhtml_branch_coverage=1 00:18:59.870 --rc genhtml_function_coverage=1 00:18:59.870 --rc genhtml_legend=1 00:18:59.870 --rc geninfo_all_blocks=1 00:18:59.870 --rc geninfo_unexecuted_blocks=1 00:18:59.870 00:18:59.870 ' 00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:59.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.870 --rc genhtml_branch_coverage=1 00:18:59.870 --rc genhtml_function_coverage=1 00:18:59.870 --rc genhtml_legend=1 00:18:59.870 --rc geninfo_all_blocks=1 00:18:59.870 --rc geninfo_unexecuted_blocks=1 00:18:59.870 00:18:59.870 ' 00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:59.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.870 --rc genhtml_branch_coverage=1 00:18:59.870 --rc genhtml_function_coverage=1 00:18:59.870 --rc genhtml_legend=1 00:18:59.870 --rc geninfo_all_blocks=1 00:18:59.870 --rc geninfo_unexecuted_blocks=1 00:18:59.870 00:18:59.870 ' 00:18:59.870 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:59.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.870 --rc genhtml_branch_coverage=1 00:18:59.870 --rc genhtml_function_coverage=1 00:18:59.870 --rc genhtml_legend=1 00:18:59.870 --rc geninfo_all_blocks=1 00:18:59.870 --rc geninfo_unexecuted_blocks=1 00:18:59.870 00:18:59.870 ' 00:18:59.870 01:04:22 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:59.870 01:04:22 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:59.870 01:04:22 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.870 01:04:22 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:59.870 01:04:22 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
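The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2: both version strings are split on the characters `.`, `-` and `:` into arrays (ver1_l=2 and ver2_l=1 fields here) and compared numerically field by field. A stand-alone sketch of that comparison logic, simplified from the traced shell; the function name version_lt and the zero-padding of missing fields are illustrative assumptions, not the exact scripts/common.sh source:

    #!/usr/bin/env bash
    # Sketch of the cmp_versions-style check traced above: returns
    # success (0) when $1 sorts before $2, splitting both strings on
    # the same IFS=.-: the traced script uses.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v c1 c2
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            c1=${ver1[v]:-0}    # missing fields compare as 0, so 1.15 behaves like 1.15.0
            c2=${ver2[v]:-0}
            (( c1 < c2 )) && return 0
            (( c1 > c2 )) && return 1
        done
        return 1    # equal versions are not less-than
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"

Here `lt 1.15 2` succeeds on the first field (1 < 2), which is why the run goes on to set the lcov 1.x-style '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options seen in the LCOV_OPTS exports above.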
00:18:59.870 01:04:22 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:59.871 01:04:22 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89177 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:59.871 01:04:22 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89177 00:18:59.871 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89177 ']' 00:18:59.871 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.871 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.871 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.871 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.871 01:04:22 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:00.133 [2024-11-26 01:04:22.830369] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:19:00.133 [2024-11-26 01:04:22.830750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89177 ] 00:19:00.133 [2024-11-26 01:04:22.970755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:00.133 [2024-11-26 01:04:22.997801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:00.133 [2024-11-26 01:04:23.041302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.133 [2024-11-26 01:04:23.041729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.133 [2024-11-26 01:04:23.041653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.079 01:04:23 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.079 01:04:23 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:01.079 01:04:23 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:01.341 01:04:23 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:01.341 01:04:23 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:01.341 01:04:23 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:01.341 01:04:23 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:01.341 01:04:23 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:01.341 01:04:23 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:01.341 01:04:24 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:01.341 { 00:19:01.341 "name": "nvme0n1", 00:19:01.341 "aliases": [ 00:19:01.341 "01c5bff3-0345-4570-9c3b-5485bc4894ca" 00:19:01.341 ], 00:19:01.341 "product_name": "NVMe disk", 00:19:01.341 "block_size": 4096, 00:19:01.341 "num_blocks": 1310720, 00:19:01.341 "uuid": "01c5bff3-0345-4570-9c3b-5485bc4894ca", 00:19:01.341 "numa_id": -1, 00:19:01.341 "assigned_rate_limits": { 00:19:01.341 "rw_ios_per_sec": 0, 00:19:01.341 "rw_mbytes_per_sec": 0, 00:19:01.341 "r_mbytes_per_sec": 0, 00:19:01.341 "w_mbytes_per_sec": 0 00:19:01.341 }, 00:19:01.341 "claimed": true, 00:19:01.341 "claim_type": "read_many_write_one", 00:19:01.341 "zoned": false, 00:19:01.341 "supported_io_types": { 00:19:01.341 "read": true, 00:19:01.341 "write": true, 00:19:01.341 "unmap": true, 00:19:01.341 "flush": true, 00:19:01.341 "reset": true, 00:19:01.341 "nvme_admin": true, 00:19:01.341 "nvme_io": true, 00:19:01.341 "nvme_io_md": false, 00:19:01.341 "write_zeroes": true, 00:19:01.341 "zcopy": false, 00:19:01.341 "get_zone_info": false, 00:19:01.341 "zone_management": false, 00:19:01.341 "zone_append": false, 00:19:01.341 "compare": true, 00:19:01.341 "compare_and_write": false, 00:19:01.341 "abort": true, 00:19:01.341 "seek_hole": false, 00:19:01.341 "seek_data": false, 00:19:01.341 "copy": true, 00:19:01.341 "nvme_iov_md": false 00:19:01.341 }, 00:19:01.341 "driver_specific": { 00:19:01.341 "nvme": [ 00:19:01.341 { 00:19:01.341 "pci_address": "0000:00:11.0", 00:19:01.341 "trid": { 00:19:01.341 "trtype": "PCIe", 00:19:01.341 "traddr": "0000:00:11.0" 00:19:01.341 }, 00:19:01.341 "ctrlr_data": { 00:19:01.341 "cntlid": 0, 00:19:01.341 "vendor_id": "0x1b36", 00:19:01.341 "model_number": "QEMU NVMe Ctrl", 00:19:01.341 "serial_number": "12341", 00:19:01.341 "firmware_revision": "8.0.0", 00:19:01.341 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:01.341 "oacs": { 00:19:01.341 "security": 0, 00:19:01.341 "format": 1, 00:19:01.341 "firmware": 0, 00:19:01.341 "ns_manage": 1 00:19:01.341 }, 00:19:01.341 "multi_ctrlr": false, 00:19:01.341 "ana_reporting": false 00:19:01.341 }, 00:19:01.341 "vs": { 00:19:01.341 "nvme_version": "1.4" 00:19:01.341 }, 00:19:01.341 "ns_data": { 00:19:01.341 "id": 1, 00:19:01.341 "can_share": false 00:19:01.341 } 00:19:01.341 } 00:19:01.341 ], 00:19:01.341 "mp_policy": "active_passive" 00:19:01.341 } 00:19:01.341 } 00:19:01.341 ]' 00:19:01.341 01:04:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:01.603 01:04:24 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:01.603 01:04:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:01.603 01:04:24 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:01.603 01:04:24 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:01.603 01:04:24 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=d5e6b1e6-e607-492d-91c3-2384db859c29 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:01.603 01:04:24 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5e6b1e6-e607-492d-91c3-2384db859c29 00:19:01.865 01:04:24 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:02.127 01:04:24 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5de077d4-ec85-4e78-b34c-9bf87bfe4c7f 00:19:02.127 01:04:24 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5de077d4-ec85-4e78-b34c-9bf87bfe4c7f 00:19:02.388 01:04:25 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.388 01:04:25 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.388 01:04:25 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:02.388 01:04:25 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:02.388 01:04:25 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.388 01:04:25 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:02.388 01:04:25 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.388 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.388 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:02.388 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:02.388 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:02.388 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.649 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:02.649 { 00:19:02.649 "name": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:02.649 "aliases": [ 00:19:02.649 "lvs/nvme0n1p0" 00:19:02.649 ], 00:19:02.649 "product_name": "Logical Volume", 00:19:02.649 "block_size": 4096, 00:19:02.649 "num_blocks": 26476544, 00:19:02.649 "uuid": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:02.649 "assigned_rate_limits": { 00:19:02.649 "rw_ios_per_sec": 0, 00:19:02.649 "rw_mbytes_per_sec": 0, 00:19:02.649 "r_mbytes_per_sec": 0, 00:19:02.649 "w_mbytes_per_sec": 0 00:19:02.649 }, 00:19:02.649 "claimed": false, 00:19:02.649 "zoned": false, 00:19:02.649 "supported_io_types": { 00:19:02.649 "read": true, 00:19:02.649 "write": true, 00:19:02.649 "unmap": true, 00:19:02.649 "flush": false, 00:19:02.649 "reset": true, 00:19:02.649 "nvme_admin": false, 00:19:02.649 "nvme_io": false, 00:19:02.649 "nvme_io_md": false, 00:19:02.649 "write_zeroes": true, 00:19:02.649 "zcopy": false, 00:19:02.649 "get_zone_info": false, 00:19:02.649 "zone_management": false, 00:19:02.649 "zone_append": false, 00:19:02.649 "compare": false, 00:19:02.649 "compare_and_write": false, 00:19:02.649 "abort": false, 00:19:02.649 "seek_hole": true, 00:19:02.649 "seek_data": true, 00:19:02.649 "copy": false, 00:19:02.649 "nvme_iov_md": false 00:19:02.649 }, 00:19:02.649 "driver_specific": { 00:19:02.649 "lvol": { 00:19:02.649 "lvol_store_uuid": "5de077d4-ec85-4e78-b34c-9bf87bfe4c7f", 00:19:02.649 "base_bdev": "nvme0n1", 00:19:02.649 "thin_provision": true, 
00:19:02.649 "num_allocated_clusters": 0, 00:19:02.649 "snapshot": false, 00:19:02.649 "clone": false, 00:19:02.649 "esnap_clone": false 00:19:02.649 } 00:19:02.649 } 00:19:02.649 } 00:19:02.649 ]' 00:19:02.649 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:02.649 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:02.649 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:02.649 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:02.649 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:02.649 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:02.649 01:04:25 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:02.649 01:04:25 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:02.649 01:04:25 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:02.910 01:04:25 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:02.910 01:04:25 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:02.910 01:04:25 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.910 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:02.910 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:02.910 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:02.910 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:02.910 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:03.170 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:03.170 { 00:19:03.170 "name": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:03.170 "aliases": [ 00:19:03.170 "lvs/nvme0n1p0" 00:19:03.170 ], 00:19:03.170 "product_name": "Logical Volume", 00:19:03.170 "block_size": 4096, 00:19:03.170 "num_blocks": 26476544, 00:19:03.170 "uuid": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:03.170 "assigned_rate_limits": { 00:19:03.170 "rw_ios_per_sec": 0, 00:19:03.170 "rw_mbytes_per_sec": 0, 00:19:03.170 "r_mbytes_per_sec": 0, 00:19:03.170 "w_mbytes_per_sec": 0 00:19:03.170 }, 00:19:03.170 "claimed": false, 00:19:03.170 "zoned": false, 00:19:03.170 "supported_io_types": { 00:19:03.170 "read": true, 00:19:03.170 "write": true, 00:19:03.170 "unmap": true, 00:19:03.170 "flush": false, 00:19:03.170 "reset": true, 00:19:03.170 "nvme_admin": false, 00:19:03.170 "nvme_io": false, 00:19:03.170 "nvme_io_md": false, 00:19:03.170 "write_zeroes": true, 00:19:03.170 "zcopy": false, 00:19:03.170 "get_zone_info": false, 00:19:03.170 "zone_management": false, 00:19:03.170 "zone_append": false, 00:19:03.170 "compare": false, 00:19:03.170 "compare_and_write": false, 00:19:03.170 "abort": false, 00:19:03.170 "seek_hole": true, 00:19:03.170 "seek_data": true, 00:19:03.170 "copy": false, 00:19:03.170 "nvme_iov_md": false 00:19:03.170 }, 00:19:03.170 "driver_specific": { 00:19:03.170 "lvol": { 00:19:03.170 "lvol_store_uuid": "5de077d4-ec85-4e78-b34c-9bf87bfe4c7f", 00:19:03.170 "base_bdev": "nvme0n1", 00:19:03.170 "thin_provision": true, 00:19:03.170 "num_allocated_clusters": 0, 00:19:03.170 "snapshot": false, 00:19:03.170 "clone": false, 00:19:03.170 
"esnap_clone": false 00:19:03.170 } 00:19:03.170 } 00:19:03.170 } 00:19:03.170 ]' 00:19:03.170 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:03.170 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:03.170 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:03.170 01:04:25 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:03.170 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:03.170 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:03.170 01:04:26 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:03.170 01:04:26 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:03.427 01:04:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:03.428 01:04:26 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:03.428 01:04:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:03.428 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:03.428 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:03.428 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:03.428 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:03.428 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 928d8577-b0bf-499d-ae56-c13d3ec59d10 00:19:03.686 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:03.686 { 00:19:03.686 "name": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:03.686 "aliases": [ 00:19:03.686 "lvs/nvme0n1p0" 00:19:03.686 ], 00:19:03.686 "product_name": "Logical Volume", 00:19:03.686 "block_size": 4096, 00:19:03.686 "num_blocks": 26476544, 00:19:03.686 "uuid": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:03.686 "assigned_rate_limits": { 00:19:03.686 "rw_ios_per_sec": 0, 00:19:03.686 "rw_mbytes_per_sec": 0, 00:19:03.686 "r_mbytes_per_sec": 0, 00:19:03.686 "w_mbytes_per_sec": 0 00:19:03.686 }, 00:19:03.686 "claimed": false, 00:19:03.686 "zoned": false, 00:19:03.686 "supported_io_types": { 00:19:03.686 "read": true, 00:19:03.686 "write": true, 00:19:03.686 "unmap": true, 00:19:03.686 "flush": false, 00:19:03.686 "reset": true, 00:19:03.686 "nvme_admin": false, 00:19:03.686 "nvme_io": false, 00:19:03.686 "nvme_io_md": false, 00:19:03.686 "write_zeroes": true, 00:19:03.686 "zcopy": false, 00:19:03.686 "get_zone_info": false, 00:19:03.686 "zone_management": false, 00:19:03.686 "zone_append": false, 00:19:03.686 "compare": false, 00:19:03.686 "compare_and_write": false, 00:19:03.686 "abort": false, 00:19:03.686 "seek_hole": true, 00:19:03.686 "seek_data": true, 00:19:03.686 "copy": false, 00:19:03.686 "nvme_iov_md": false 00:19:03.686 }, 00:19:03.686 "driver_specific": { 00:19:03.686 "lvol": { 00:19:03.686 "lvol_store_uuid": "5de077d4-ec85-4e78-b34c-9bf87bfe4c7f", 00:19:03.686 "base_bdev": "nvme0n1", 00:19:03.686 "thin_provision": true, 00:19:03.686 "num_allocated_clusters": 0, 00:19:03.686 "snapshot": false, 00:19:03.686 "clone": false, 00:19:03.686 "esnap_clone": false 00:19:03.686 } 00:19:03.686 } 00:19:03.686 } 00:19:03.686 ]' 00:19:03.686 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:03.686 01:04:26 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:03.686 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:03.686 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:03.686 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:03.686 01:04:26 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:03.686 01:04:26 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:03.686 01:04:26 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 928d8577-b0bf-499d-ae56-c13d3ec59d10 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:03.946 [2024-11-26 01:04:26.639632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.639811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:03.946 [2024-11-26 01:04:26.639833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:03.946 [2024-11-26 01:04:26.639872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.641928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.641957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:03.946 [2024-11-26 01:04:26.641968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:19:03.946 [2024-11-26 01:04:26.641977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.642082] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:03.946 [2024-11-26 01:04:26.642282] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:03.946 [2024-11-26 01:04:26.642305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.642313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:03.946 [2024-11-26 01:04:26.642323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:03.946 [2024-11-26 01:04:26.642329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.642413] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0d18485b-f600-4eee-9447-45835e109f8e 00:19:03.946 [2024-11-26 01:04:26.643684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.643714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:03.946 [2024-11-26 01:04:26.643723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:03.946 [2024-11-26 01:04:26.643734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.650545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.650673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:03.946 [2024-11-26 01:04:26.650685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.728 ms 00:19:03.946 [2024-11-26 01:04:26.650707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.650825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:03.946 [2024-11-26 01:04:26.650836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:03.946 [2024-11-26 01:04:26.650862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:03.946 [2024-11-26 01:04:26.650880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.650908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.650917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:03.946 [2024-11-26 01:04:26.650924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:03.946 [2024-11-26 01:04:26.650932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.650975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:03.946 [2024-11-26 01:04:26.652585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.652613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:03.946 [2024-11-26 01:04:26.652623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:19:03.946 [2024-11-26 01:04:26.652629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.652682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.652690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:03.946 [2024-11-26 01:04:26.652701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:03.946 [2024-11-26 01:04:26.652708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.652735] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:03.946 [2024-11-26 01:04:26.652854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:03.946 [2024-11-26 01:04:26.652878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:03.946 [2024-11-26 01:04:26.652887] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:03.946 [2024-11-26 01:04:26.652897] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:03.946 [2024-11-26 01:04:26.652904] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:03.946 [2024-11-26 01:04:26.652921] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:03.946 [2024-11-26 01:04:26.652927] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:03.946 [2024-11-26 01:04:26.652936] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:03.946 [2024-11-26 01:04:26.652944] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:03.946 [2024-11-26 01:04:26.652953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.652959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:03.946 [2024-11-26 01:04:26.652967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 
00:19:03.946 [2024-11-26 01:04:26.652973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.653055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.946 [2024-11-26 01:04:26.653062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:03.946 [2024-11-26 01:04:26.653070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:03.946 [2024-11-26 01:04:26.653076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.946 [2024-11-26 01:04:26.653180] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:03.946 [2024-11-26 01:04:26.653192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:03.946 [2024-11-26 01:04:26.653201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.946 [2024-11-26 01:04:26.653207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:03.946 [2024-11-26 01:04:26.653220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:03.946 [2024-11-26 01:04:26.653231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:03.946 [2024-11-26 01:04:26.653238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.946 [2024-11-26 01:04:26.653251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:03.946 [2024-11-26 01:04:26.653256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:03.946 [2024-11-26 01:04:26.653265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:03.946 [2024-11-26 01:04:26.653270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:03.946 [2024-11-26 01:04:26.653277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:03.946 [2024-11-26 01:04:26.653283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:03.946 [2024-11-26 01:04:26.653294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:03.946 [2024-11-26 01:04:26.653300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:03.946 [2024-11-26 01:04:26.653313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.946 [2024-11-26 01:04:26.653325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:03.946 [2024-11-26 01:04:26.653329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.946 [2024-11-26 01:04:26.653344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:03.946 [2024-11-26 01:04:26.653352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.946 [2024-11-26 01:04:26.653365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:03.946 [2024-11-26 01:04:26.653371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:03.946 [2024-11-26 01:04:26.653383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:03.946 [2024-11-26 01:04:26.653395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:03.946 [2024-11-26 01:04:26.653400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.946 [2024-11-26 01:04:26.653406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:03.946 [2024-11-26 01:04:26.653411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:03.946 [2024-11-26 01:04:26.653418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:03.946 [2024-11-26 01:04:26.653424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:03.946 [2024-11-26 01:04:26.653430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:03.946 [2024-11-26 01:04:26.653435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.947 [2024-11-26 01:04:26.653443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:03.947 [2024-11-26 01:04:26.653447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:03.947 [2024-11-26 01:04:26.653454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.947 [2024-11-26 01:04:26.653459] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:03.947 [2024-11-26 01:04:26.653468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:03.947 [2024-11-26 01:04:26.653474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:03.947 [2024-11-26 01:04:26.653481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:03.947 [2024-11-26 01:04:26.653486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:03.947 [2024-11-26 01:04:26.653493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:03.947 [2024-11-26 01:04:26.653498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:03.947 [2024-11-26 01:04:26.653505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:03.947 [2024-11-26 01:04:26.653509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:03.947 [2024-11-26 01:04:26.653516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:03.947 [2024-11-26 01:04:26.653523] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:03.947 [2024-11-26 01:04:26.653533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.947 [2024-11-26 01:04:26.653541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:03.947 [2024-11-26 01:04:26.653548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:03.947 [2024-11-26 01:04:26.653554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:03.947 [2024-11-26 01:04:26.653561] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:03.947 [2024-11-26 01:04:26.653567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:03.947 [2024-11-26 01:04:26.653575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:03.947 [2024-11-26 01:04:26.653581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:03.947 [2024-11-26 01:04:26.653589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:03.947 [2024-11-26 01:04:26.653595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:03.947 [2024-11-26 01:04:26.653602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:03.947 [2024-11-26 01:04:26.653607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:03.947 [2024-11-26 01:04:26.653614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:03.947 [2024-11-26 01:04:26.653620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:03.947 [2024-11-26 01:04:26.653628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:03.947 [2024-11-26 01:04:26.653633] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:03.947 [2024-11-26 01:04:26.653641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:03.947 [2024-11-26 01:04:26.653656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:03.947 [2024-11-26 01:04:26.653664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:03.947 [2024-11-26 01:04:26.653669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:03.947 [2024-11-26 01:04:26.653676] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:03.947 [2024-11-26 01:04:26.653682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.947 [2024-11-26 01:04:26.653699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:03.947 [2024-11-26 01:04:26.653705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:03.947 
[2024-11-26 01:04:26.653712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.947 [2024-11-26 01:04:26.653784] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:03.947 [2024-11-26 01:04:26.653794] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:06.476 [2024-11-26 01:04:28.790988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.791168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:06.476 [2024-11-26 01:04:28.791321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2137.196 ms 00:19:06.476 [2024-11-26 01:04:28.791352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.801966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.802112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:06.476 [2024-11-26 01:04:28.802129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.442 ms 00:19:06.476 [2024-11-26 01:04:28.802143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.802277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.802290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:06.476 [2024-11-26 01:04:28.802302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:06.476 [2024-11-26 01:04:28.802311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.821656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.821697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:06.476 [2024-11-26 01:04:28.821709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.313 ms 00:19:06.476 [2024-11-26 01:04:28.821720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.821808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.821825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:06.476 [2024-11-26 01:04:28.821834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:06.476 [2024-11-26 01:04:28.821863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.822288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.822320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:06.476 [2024-11-26 01:04:28.822331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:19:06.476 [2024-11-26 01:04:28.822343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.822470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.822481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:06.476 [2024-11-26 01:04:28.822492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:06.476 [2024-11-26 01:04:28.822504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.829521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.829559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:06.476 [2024-11-26 01:04:28.829583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.985 ms 00:19:06.476 [2024-11-26 01:04:28.829605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.839819] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:06.476 [2024-11-26 01:04:28.856971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.856999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:06.476 [2024-11-26 01:04:28.857013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.265 ms 00:19:06.476 [2024-11-26 01:04:28.857021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.910415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.910457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:06.476 [2024-11-26 01:04:28.910474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.299 ms 00:19:06.476 [2024-11-26 01:04:28.910485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.910694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.910706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:06.476 [2024-11-26 01:04:28.910718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:19:06.476 [2024-11-26 01:04:28.910726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.913777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.913808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:06.476 [2024-11-26 01:04:28.913821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:19:06.476 [2024-11-26 01:04:28.913829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.916140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.916168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:06.476 [2024-11-26 01:04:28.916181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:19:06.476 [2024-11-26 01:04:28.916189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.916507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.916523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:06.476 [2024-11-26 01:04:28.916536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:19:06.476 [2024-11-26 01:04:28.916543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.946711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.946750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:19:06.476 [2024-11-26 01:04:28.946768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.129 ms 00:19:06.476 [2024-11-26 01:04:28.946789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.950920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.950952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:06.476 [2024-11-26 01:04:28.950965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.024 ms 00:19:06.476 [2024-11-26 01:04:28.950985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.953706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.953877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:06.476 [2024-11-26 01:04:28.953898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms 00:19:06.476 [2024-11-26 01:04:28.953907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.957647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.957773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:06.476 [2024-11-26 01:04:28.957794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:19:06.476 [2024-11-26 01:04:28.957802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.957893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.957905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:06.476 [2024-11-26 01:04:28.957916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:06.476 [2024-11-26 01:04:28.957923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.958001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.476 [2024-11-26 01:04:28.958010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:06.476 [2024-11-26 01:04:28.958020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:06.476 [2024-11-26 01:04:28.958028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.476 [2024-11-26 01:04:28.959190] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:06.476 [2024-11-26 01:04:28.960204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2319.239 ms, result 0 00:19:06.476 [2024-11-26 01:04:28.960924] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:06.476 { 00:19:06.476 "name": "ftl0", 00:19:06.476 "uuid": "0d18485b-f600-4eee-9447-45835e109f8e" 00:19:06.476 } 00:19:06.476 01:04:28 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:06.476 01:04:28 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:06.476 01:04:28 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:06.476 01:04:28 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:06.476 01:04:28 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:06.476 01:04:28 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:06.476 01:04:28 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:06.476 01:04:29 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:06.476 [ 00:19:06.476 { 00:19:06.476 "name": "ftl0", 00:19:06.476 "aliases": [ 00:19:06.476 "0d18485b-f600-4eee-9447-45835e109f8e" 00:19:06.476 ], 00:19:06.476 "product_name": "FTL disk", 00:19:06.476 "block_size": 4096, 00:19:06.476 "num_blocks": 23592960, 00:19:06.476 "uuid": "0d18485b-f600-4eee-9447-45835e109f8e", 00:19:06.476 "assigned_rate_limits": { 00:19:06.476 "rw_ios_per_sec": 0, 00:19:06.476 "rw_mbytes_per_sec": 0, 00:19:06.476 "r_mbytes_per_sec": 0, 00:19:06.476 "w_mbytes_per_sec": 0 00:19:06.476 }, 00:19:06.476 "claimed": false, 00:19:06.476 "zoned": false, 00:19:06.476 "supported_io_types": { 00:19:06.476 "read": true, 00:19:06.476 "write": true, 00:19:06.476 "unmap": true, 00:19:06.476 "flush": true, 00:19:06.476 "reset": false, 00:19:06.476 "nvme_admin": false, 00:19:06.476 "nvme_io": false, 00:19:06.476 "nvme_io_md": false, 00:19:06.476 "write_zeroes": true, 00:19:06.476 "zcopy": false, 00:19:06.476 "get_zone_info": false, 00:19:06.476 "zone_management": false, 00:19:06.476 "zone_append": false, 00:19:06.476 "compare": false, 00:19:06.476 "compare_and_write": false, 00:19:06.476 "abort": false, 00:19:06.476 "seek_hole": false, 00:19:06.476 "seek_data": false, 00:19:06.476 "copy": false, 00:19:06.476 "nvme_iov_md": false 00:19:06.476 }, 00:19:06.476 "driver_specific": { 00:19:06.476 "ftl": { 00:19:06.476 "base_bdev": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:06.476 "cache": "nvc0n1p0" 00:19:06.476 } 00:19:06.476 } 00:19:06.476 } 00:19:06.476 ] 00:19:06.476 01:04:29 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:06.476 01:04:29 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:06.476 01:04:29 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:06.734 01:04:29 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:06.734 01:04:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:06.992 01:04:29 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:06.992 { 00:19:06.992 "name": "ftl0", 00:19:06.992 "aliases": [ 00:19:06.992 "0d18485b-f600-4eee-9447-45835e109f8e" 00:19:06.992 ], 00:19:06.992 "product_name": "FTL disk", 00:19:06.992 "block_size": 4096, 00:19:06.992 "num_blocks": 23592960, 00:19:06.992 "uuid": "0d18485b-f600-4eee-9447-45835e109f8e", 00:19:06.992 "assigned_rate_limits": { 00:19:06.992 "rw_ios_per_sec": 0, 00:19:06.992 "rw_mbytes_per_sec": 0, 00:19:06.992 "r_mbytes_per_sec": 0, 00:19:06.992 "w_mbytes_per_sec": 0 00:19:06.992 }, 00:19:06.992 "claimed": false, 00:19:06.992 "zoned": false, 00:19:06.992 "supported_io_types": { 00:19:06.992 "read": true, 00:19:06.992 "write": true, 00:19:06.992 "unmap": true, 00:19:06.992 "flush": true, 00:19:06.992 "reset": false, 00:19:06.992 "nvme_admin": false, 00:19:06.992 "nvme_io": false, 00:19:06.992 "nvme_io_md": false, 00:19:06.992 "write_zeroes": true, 00:19:06.992 "zcopy": false, 00:19:06.992 "get_zone_info": false, 00:19:06.992 "zone_management": false, 00:19:06.992 "zone_append": false, 00:19:06.992 "compare": false, 00:19:06.992 "compare_and_write": false, 00:19:06.992 "abort": false, 00:19:06.992 "seek_hole": false, 
00:19:06.993 "seek_data": false, 00:19:06.993 "copy": false, 00:19:06.993 "nvme_iov_md": false 00:19:06.993 }, 00:19:06.993 "driver_specific": { 00:19:06.993 "ftl": { 00:19:06.993 "base_bdev": "928d8577-b0bf-499d-ae56-c13d3ec59d10", 00:19:06.993 "cache": "nvc0n1p0" 00:19:06.993 } 00:19:06.993 } 00:19:06.993 } 00:19:06.993 ]' 00:19:06.993 01:04:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:06.993 01:04:29 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:06.993 01:04:29 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:07.252 [2024-11-26 01:04:29.984927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.252 [2024-11-26 01:04:29.984958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:07.252 [2024-11-26 01:04:29.984967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:07.252 [2024-11-26 01:04:29.984977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.252 [2024-11-26 01:04:29.985005] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:07.252 [2024-11-26 01:04:29.985526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.252 [2024-11-26 01:04:29.985550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:07.252 [2024-11-26 01:04:29.985560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:19:07.252 [2024-11-26 01:04:29.985569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.252 [2024-11-26 01:04:29.986093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.252 [2024-11-26 01:04:29.986110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:07.252 [2024-11-26 01:04:29.986119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:19:07.252 [2024-11-26 01:04:29.986126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.252 [2024-11-26 01:04:29.988826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.252 [2024-11-26 01:04:29.988943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:07.252 [2024-11-26 01:04:29.988958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.675 ms 00:19:07.252 [2024-11-26 01:04:29.988965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.252 [2024-11-26 01:04:29.994148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.252 [2024-11-26 01:04:29.994174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:07.252 [2024-11-26 01:04:29.994187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.141 ms 00:19:07.253 [2024-11-26 01:04:29.994192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:29.996026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.253 [2024-11-26 01:04:29.996124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:07.253 [2024-11-26 01:04:29.996139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:19:07.253 [2024-11-26 01:04:29.996145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:30.001026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:07.253 [2024-11-26 01:04:30.001053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:07.253 [2024-11-26 01:04:30.001063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.834 ms 00:19:07.253 [2024-11-26 01:04:30.001071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:30.001237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.253 [2024-11-26 01:04:30.001246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:07.253 [2024-11-26 01:04:30.001254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:07.253 [2024-11-26 01:04:30.001259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:30.003026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.253 [2024-11-26 01:04:30.003122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:07.253 [2024-11-26 01:04:30.003139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:19:07.253 [2024-11-26 01:04:30.003145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:30.004562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.253 [2024-11-26 01:04:30.004589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:07.253 [2024-11-26 01:04:30.004599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:19:07.253 [2024-11-26 01:04:30.004605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:30.005594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.253 [2024-11-26 01:04:30.005620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:07.253 [2024-11-26 01:04:30.005630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:19:07.253 [2024-11-26 01:04:30.005636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:30.006655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.253 [2024-11-26 01:04:30.006753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:07.253 [2024-11-26 01:04:30.006767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:19:07.253 [2024-11-26 01:04:30.006773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.253 [2024-11-26 01:04:30.006812] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:07.253 [2024-11-26 01:04:30.006823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.006998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007245] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:07.253 [2024-11-26 01:04:30.007286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 
01:04:30.007422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:07.254 [2024-11-26 01:04:30.007566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:07.254 [2024-11-26 01:04:30.007574] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d18485b-f600-4eee-9447-45835e109f8e 00:19:07.254 [2024-11-26 01:04:30.007581] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:07.254 [2024-11-26 01:04:30.007590] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:07.254 [2024-11-26 01:04:30.007596] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:07.254 [2024-11-26 01:04:30.007604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
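[Editor's note — illustrative cross-check, not part of the test output.] The shutdown statistics dumped above are internally consistent: the bdev reported earlier has num_blocks 23592960 at a 4096-byte block_size, i.e. an exported capacity of exactly 90 GiB, and with 960 total media writes against 0 user writes the write-amplification factor (WAF = total writes / user writes) degenerates, which the dump prints as "inf". A minimal Python sketch of the arithmetic (variable names are ad hoc, taken from the dump):

    block_size   = 4096       # "block_size" from the bdev_get_bdevs output above
    num_blocks   = 23592960   # "num_blocks" from the same output
    total_writes = 960        # "total writes" from the stats dump
    user_writes  = 0          # "user writes" from the stats dump

    # Exported capacity: 23592960 * 4096 B = 96,636,764,160 B = 90 GiB exactly.
    assert num_blocks * block_size == 90 * 2**30

    # WAF is media writes per user write; only metadata was written here,
    # so the ratio is undefined, matching the "WAF: inf" line above.
    waf = total_writes / user_writes if user_writes else float("inf")
    print(waf)                # -> inf

The same numbers also account for the l2p region size dumped during startup later in the log: 23592960 four-byte L2P entries occupy 94,371,840 bytes, i.e. 90.00 MiB.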
00:19:07.254 [2024-11-26 01:04:30.007609] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:07.254 [2024-11-26 01:04:30.007617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:07.254 [2024-11-26 01:04:30.007624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:07.254 [2024-11-26 01:04:30.007631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:07.254 [2024-11-26 01:04:30.007636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:07.254 [2024-11-26 01:04:30.007644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.254 [2024-11-26 01:04:30.007650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:07.254 [2024-11-26 01:04:30.007660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:19:07.254 [2024-11-26 01:04:30.007666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.010079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.254 [2024-11-26 01:04:30.010116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:07.254 [2024-11-26 01:04:30.010128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:19:07.254 [2024-11-26 01:04:30.010136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.010244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:07.254 [2024-11-26 01:04:30.010253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:07.254 [2024-11-26 01:04:30.010262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:07.254 [2024-11-26 01:04:30.010269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.016494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.016598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:07.254 [2024-11-26 01:04:30.016664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.016684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.016781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.016806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:07.254 [2024-11-26 01:04:30.016873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.016896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.016967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.017012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:07.254 [2024-11-26 01:04:30.017031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.017078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.017137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.017161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:07.254 [2024-11-26 01:04:30.017180] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.017264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.028879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.029010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:07.254 [2024-11-26 01:04:30.029054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.029133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.038650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.038788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:07.254 [2024-11-26 01:04:30.038808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.038817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.038901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.038914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:07.254 [2024-11-26 01:04:30.038923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.038930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.038978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.038986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:07.254 [2024-11-26 01:04:30.039008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.039015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.039108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.039117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:07.254 [2024-11-26 01:04:30.039128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.039135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.039179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.039188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:07.254 [2024-11-26 01:04:30.039199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.254 [2024-11-26 01:04:30.039206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.254 [2024-11-26 01:04:30.039278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.254 [2024-11-26 01:04:30.039287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:07.255 [2024-11-26 01:04:30.039297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.255 [2024-11-26 01:04:30.039304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.255 [2024-11-26 01:04:30.039360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:07.255 [2024-11-26 01:04:30.039370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:19:07.255 [2024-11-26 01:04:30.039381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:07.255 [2024-11-26 01:04:30.039389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:07.255 [2024-11-26 01:04:30.039566] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.608 ms, result 0 00:19:07.255 true 00:19:07.255 01:04:30 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89177 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89177 ']' 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89177 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89177 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89177' 00:19:07.255 killing process with pid 89177 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89177 00:19:07.255 01:04:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89177 00:19:12.524 01:04:34 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:13.096 65536+0 records in 00:19:13.096 65536+0 records out 00:19:13.096 268435456 bytes (268 MB, 256 MiB) copied, 1.09464 s, 245 MB/s 00:19:13.096 01:04:36 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:13.357 [2024-11-26 01:04:36.073784] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:19:13.357 [2024-11-26 01:04:36.074178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89342 ] 00:19:13.357 [2024-11-26 01:04:36.209672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
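[Editor's note — illustrative cross-check, not part of the test output.] The dd numbers above are exact: 65536 records × 4 KiB = 268,435,456 bytes, which is 256 MiB (268 MB decimal), and 268,435,456 B / 1.09464 s ≈ 245 MB/s in the decimal megabytes dd reports. As a quick sketch:

    records, bs = 65536, 4096            # count=65536, bs=4K from the dd line
    total = records * bs                 # 268,435,456 bytes
    assert total == 256 * 2**20          # 256 MiB
    print(round(total / 1.09464 / 1e6))  # -> 245 (MB/s, decimal)

This 256 MiB random pattern is what spdk_dd then writes to ftl0 in the "Copying: N/256 [MB]" progress lines that follow.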
00:19:13.357 [2024-11-26 01:04:36.236168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.357 [2024-11-26 01:04:36.258818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.620 [2024-11-26 01:04:36.358295] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.620 [2024-11-26 01:04:36.358352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:13.620 [2024-11-26 01:04:36.512474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.512645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:13.620 [2024-11-26 01:04:36.512663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:13.620 [2024-11-26 01:04:36.512671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.514586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.514616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.620 [2024-11-26 01:04:36.514626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.898 ms 00:19:13.620 [2024-11-26 01:04:36.514632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.514691] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:13.620 [2024-11-26 01:04:36.514891] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:13.620 [2024-11-26 01:04:36.514903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.514910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.620 [2024-11-26 01:04:36.514917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:19:13.620 [2024-11-26 01:04:36.514923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.516204] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:13.620 [2024-11-26 01:04:36.518965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.519082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:13.620 [2024-11-26 01:04:36.519094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:19:13.620 [2024-11-26 01:04:36.519102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.519152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.519159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:13.620 [2024-11-26 01:04:36.519166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:13.620 [2024-11-26 01:04:36.519172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.525332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.525358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.620 [2024-11-26 01:04:36.525366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms 00:19:13.620 [2024-11-26 01:04:36.525375] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.525459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.525467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.620 [2024-11-26 01:04:36.525474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:13.620 [2024-11-26 01:04:36.525481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.525502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.525509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:13.620 [2024-11-26 01:04:36.525515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:13.620 [2024-11-26 01:04:36.525521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.620 [2024-11-26 01:04:36.525540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:13.620 [2024-11-26 01:04:36.527116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.620 [2024-11-26 01:04:36.527139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.621 [2024-11-26 01:04:36.527152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:19:13.621 [2024-11-26 01:04:36.527159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.621 [2024-11-26 01:04:36.527196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.621 [2024-11-26 01:04:36.527203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:13.621 [2024-11-26 01:04:36.527210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:13.621 [2024-11-26 01:04:36.527217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.621 [2024-11-26 01:04:36.527233] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:13.621 [2024-11-26 01:04:36.527250] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:13.621 [2024-11-26 01:04:36.527281] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:13.621 [2024-11-26 01:04:36.527297] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:13.621 [2024-11-26 01:04:36.527380] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:13.621 [2024-11-26 01:04:36.527391] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:13.621 [2024-11-26 01:04:36.527406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:13.621 [2024-11-26 01:04:36.527415] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527425] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527432] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:13.621 [2024-11-26 01:04:36.527440] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:13.621 [2024-11-26 01:04:36.527446] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:13.621 [2024-11-26 01:04:36.527458] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:13.621 [2024-11-26 01:04:36.527464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.621 [2024-11-26 01:04:36.527470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:13.621 [2024-11-26 01:04:36.527475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:19:13.621 [2024-11-26 01:04:36.527483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.621 [2024-11-26 01:04:36.527549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.621 [2024-11-26 01:04:36.527555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:13.621 [2024-11-26 01:04:36.527561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:13.621 [2024-11-26 01:04:36.527566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.621 [2024-11-26 01:04:36.527640] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:13.621 [2024-11-26 01:04:36.527648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:13.621 [2024-11-26 01:04:36.527654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:13.621 [2024-11-26 01:04:36.527680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:13.621 [2024-11-26 01:04:36.527699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.621 [2024-11-26 01:04:36.527709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:13.621 [2024-11-26 01:04:36.527715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:13.621 [2024-11-26 01:04:36.527721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:13.621 [2024-11-26 01:04:36.527726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:13.621 [2024-11-26 01:04:36.527732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:13.621 [2024-11-26 01:04:36.527737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:13.621 [2024-11-26 01:04:36.527748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:13.621 [2024-11-26 01:04:36.527763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527769] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:13.621 [2024-11-26 01:04:36.527782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:13.621 [2024-11-26 01:04:36.527796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:13.621 [2024-11-26 01:04:36.527812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:13.621 [2024-11-26 01:04:36.527827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.621 [2024-11-26 01:04:36.527837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:13.621 [2024-11-26 01:04:36.527856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:13.621 [2024-11-26 01:04:36.527863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:13.621 [2024-11-26 01:04:36.527868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:13.621 [2024-11-26 01:04:36.527874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:13.621 [2024-11-26 01:04:36.527880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:13.621 [2024-11-26 01:04:36.527890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:13.621 [2024-11-26 01:04:36.527896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527902] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:13.621 [2024-11-26 01:04:36.527908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:13.621 [2024-11-26 01:04:36.527914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:13.621 [2024-11-26 01:04:36.527920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:13.621 [2024-11-26 01:04:36.527926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:13.621 [2024-11-26 01:04:36.527932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:13.621 [2024-11-26 01:04:36.527938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:13.621 [2024-11-26 01:04:36.527943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:13.621 [2024-11-26 01:04:36.527948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:13.621 [2024-11-26 01:04:36.527954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:13.621 [2024-11-26 01:04:36.527960] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:13.621 [2024-11-26 01:04:36.527980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.621 [2024-11-26 01:04:36.527986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:13.621 [2024-11-26 01:04:36.527993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:13.621 [2024-11-26 01:04:36.527998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:13.621 [2024-11-26 01:04:36.528004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:13.621 [2024-11-26 01:04:36.528009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:13.621 [2024-11-26 01:04:36.528015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:13.621 [2024-11-26 01:04:36.528021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:13.621 [2024-11-26 01:04:36.528027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:13.621 [2024-11-26 01:04:36.528033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:13.622 [2024-11-26 01:04:36.528038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:13.622 [2024-11-26 01:04:36.528043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:13.622 [2024-11-26 01:04:36.528049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:13.622 [2024-11-26 01:04:36.528054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:13.622 [2024-11-26 01:04:36.528060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:13.622 [2024-11-26 01:04:36.528065] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:13.622 [2024-11-26 01:04:36.528074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:13.622 [2024-11-26 01:04:36.528080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:13.622 [2024-11-26 01:04:36.528086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:13.622 [2024-11-26 01:04:36.528091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:13.622 [2024-11-26 01:04:36.528096] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:13.622 [2024-11-26 01:04:36.528103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.622 [2024-11-26 01:04:36.528110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:13.622 [2024-11-26 01:04:36.528115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:19:13.622 [2024-11-26 01:04:36.528123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.539270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.539298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.884 [2024-11-26 01:04:36.539306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.103 ms 00:19:13.884 [2024-11-26 01:04:36.539313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.539411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.539420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:13.884 [2024-11-26 01:04:36.539427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:13.884 [2024-11-26 01:04:36.539433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.560896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.560937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.884 [2024-11-26 01:04:36.560952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.444 ms 00:19:13.884 [2024-11-26 01:04:36.560970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.561058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.561072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.884 [2024-11-26 01:04:36.561084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:13.884 [2024-11-26 01:04:36.561093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.561512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.561545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.884 [2024-11-26 01:04:36.561556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:19:13.884 [2024-11-26 01:04:36.561566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.561733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.561744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.884 [2024-11-26 01:04:36.561754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:13.884 [2024-11-26 01:04:36.561762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.569040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 
01:04:36.569076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.884 [2024-11-26 01:04:36.569087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.253 ms 00:19:13.884 [2024-11-26 01:04:36.569097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.572297] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:13.884 [2024-11-26 01:04:36.572459] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:13.884 [2024-11-26 01:04:36.572474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.572483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:13.884 [2024-11-26 01:04:36.572491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:19:13.884 [2024-11-26 01:04:36.572498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.587427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.587553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:13.884 [2024-11-26 01:04:36.587569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.889 ms 00:19:13.884 [2024-11-26 01:04:36.587577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.589771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.589802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:13.884 [2024-11-26 01:04:36.589811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.130 ms 00:19:13.884 [2024-11-26 01:04:36.589819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.591526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.591554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:13.884 [2024-11-26 01:04:36.591563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:19:13.884 [2024-11-26 01:04:36.591570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.591913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.591929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:13.884 [2024-11-26 01:04:36.591938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:13.884 [2024-11-26 01:04:36.591945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.611720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.611756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:13.884 [2024-11-26 01:04:36.611768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.754 ms 00:19:13.884 [2024-11-26 01:04:36.611776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.619592] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:13.884 [2024-11-26 01:04:36.636827] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.636872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:13.884 [2024-11-26 01:04:36.636884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.939 ms 00:19:13.884 [2024-11-26 01:04:36.636892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.636975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.636986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:13.884 [2024-11-26 01:04:36.636996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:13.884 [2024-11-26 01:04:36.637006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.637062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.637071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:13.884 [2024-11-26 01:04:36.637080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:13.884 [2024-11-26 01:04:36.637088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.637112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.637121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:13.884 [2024-11-26 01:04:36.637134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:13.884 [2024-11-26 01:04:36.637142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.637181] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:13.884 [2024-11-26 01:04:36.637192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.637200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:13.884 [2024-11-26 01:04:36.637208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:13.884 [2024-11-26 01:04:36.637216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.641901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.641940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:13.884 [2024-11-26 01:04:36.641954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.665 ms 00:19:13.884 [2024-11-26 01:04:36.641962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.642044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.884 [2024-11-26 01:04:36.642054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:13.884 [2024-11-26 01:04:36.642085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:13.884 [2024-11-26 01:04:36.642094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.884 [2024-11-26 01:04:36.643098] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:13.884 [2024-11-26 01:04:36.644136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.302 
ms, result 0 00:19:13.884 [2024-11-26 01:04:36.645667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.884 [2024-11-26 01:04:36.653384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:14.830  [2024-11-26T01:04:38.689Z] Copying: 18/256 [MB] (18 MBps) [2024-11-26T01:04:40.073Z] Copying: 37/256 [MB] (18 MBps) [2024-11-26T01:04:40.664Z] Copying: 60/256 [MB] (23 MBps) [2024-11-26T01:04:42.048Z] Copying: 83/256 [MB] (22 MBps) [2024-11-26T01:04:42.993Z] Copying: 100/256 [MB] (17 MBps) [2024-11-26T01:04:43.938Z] Copying: 120/256 [MB] (19 MBps) [2024-11-26T01:04:44.884Z] Copying: 135/256 [MB] (15 MBps) [2024-11-26T01:04:45.829Z] Copying: 156/256 [MB] (20 MBps) [2024-11-26T01:04:46.775Z] Copying: 178/256 [MB] (22 MBps) [2024-11-26T01:04:47.720Z] Copying: 196/256 [MB] (17 MBps) [2024-11-26T01:04:48.664Z] Copying: 210/256 [MB] (14 MBps) [2024-11-26T01:04:49.667Z] Copying: 222/256 [MB] (11 MBps) [2024-11-26T01:04:51.055Z] Copying: 233/256 [MB] (10 MBps) [2024-11-26T01:04:52.000Z] Copying: 243/256 [MB] (10 MBps) [2024-11-26T01:04:52.000Z] Copying: 253/256 [MB] (10 MBps) [2024-11-26T01:04:52.000Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-26 01:04:51.852625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:29.084 [2024-11-26 01:04:51.854424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.854481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:29.084 [2024-11-26 01:04:51.854495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:29.084 [2024-11-26 01:04:51.854504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.854527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:29.084 [2024-11-26 01:04:51.855151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.855182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:29.084 [2024-11-26 01:04:51.855194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:19:29.084 [2024-11-26 01:04:51.855201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.858361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.858416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:29.084 [2024-11-26 01:04:51.858429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.132 ms 00:19:29.084 [2024-11-26 01:04:51.858446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.866490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.866678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:29.084 [2024-11-26 01:04:51.866698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.024 ms 00:19:29.084 [2024-11-26 01:04:51.866706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.873692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.873871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finish L2P trims 00:19:29.084 [2024-11-26 01:04:51.873891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.933 ms 00:19:29.084 [2024-11-26 01:04:51.873908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.876642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.876704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:29.084 [2024-11-26 01:04:51.876714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:19:29.084 [2024-11-26 01:04:51.876722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.881610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.881661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:29.084 [2024-11-26 01:04:51.881674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.843 ms 00:19:29.084 [2024-11-26 01:04:51.881682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.881814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.881825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:29.084 [2024-11-26 01:04:51.881834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:29.084 [2024-11-26 01:04:51.881870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.884488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.884661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:29.084 [2024-11-26 01:04:51.884678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:19:29.084 [2024-11-26 01:04:51.884686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.887113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.887161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:29.084 [2024-11-26 01:04:51.887171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.388 ms 00:19:29.084 [2024-11-26 01:04:51.887177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.889462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.889508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:29.084 [2024-11-26 01:04:51.889518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.242 ms 00:19:29.084 [2024-11-26 01:04:51.889524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.891729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.084 [2024-11-26 01:04:51.891901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:29.084 [2024-11-26 01:04:51.891917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:19:29.084 [2024-11-26 01:04:51.891925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.084 [2024-11-26 01:04:51.891962] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:19:29.084 [2024-11-26 01:04:51.891977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.891987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.891995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:29.084 [2024-11-26 01:04:51.892122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 
state: free 00:19:29.085 [2024-11-26 01:04:51.892168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 
0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:29.085 [2024-11-26 01:04:51.892586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892716] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:29.086 [2024-11-26 01:04:51.892747] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:29.086 [2024-11-26 01:04:51.892756] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d18485b-f600-4eee-9447-45835e109f8e 00:19:29.086 [2024-11-26 01:04:51.892765] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:29.086 [2024-11-26 01:04:51.892773] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:29.086 [2024-11-26 01:04:51.892780] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:29.086 [2024-11-26 01:04:51.892788] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:29.086 [2024-11-26 01:04:51.892796] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:29.086 [2024-11-26 01:04:51.892804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:29.086 [2024-11-26 01:04:51.892816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:29.086 [2024-11-26 01:04:51.892822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:29.086 [2024-11-26 01:04:51.892829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:29.086 [2024-11-26 01:04:51.892836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.086 [2024-11-26 01:04:51.892857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:29.086 [2024-11-26 01:04:51.892867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:19:29.086 [2024-11-26 01:04:51.892874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.895104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.086 [2024-11-26 01:04:51.895169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:29.086 [2024-11-26 01:04:51.895182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:19:29.086 [2024-11-26 01:04:51.895192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.895337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:29.086 [2024-11-26 01:04:51.895347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:29.086 [2024-11-26 01:04:51.895356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:29.086 [2024-11-26 01:04:51.895363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.903259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.903306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:29.086 [2024-11-26 01:04:51.903317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.903332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.903402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.903411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:29.086 [2024-11-26 01:04:51.903419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.903426] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.903470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.903481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:29.086 [2024-11-26 01:04:51.903489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.903496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.903516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.903524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:29.086 [2024-11-26 01:04:51.903531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.903539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.916225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.916283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:29.086 [2024-11-26 01:04:51.916298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.916306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.926275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.926467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:29.086 [2024-11-26 01:04:51.926485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.926494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.926545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.926555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:29.086 [2024-11-26 01:04:51.926563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.926571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.926611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.926623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:29.086 [2024-11-26 01:04:51.926631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.086 [2024-11-26 01:04:51.926640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.086 [2024-11-26 01:04:51.926715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.086 [2024-11-26 01:04:51.926726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:29.087 [2024-11-26 01:04:51.926734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.087 [2024-11-26 01:04:51.926742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.087 [2024-11-26 01:04:51.926773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.087 [2024-11-26 01:04:51.926783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:29.087 [2024-11-26 01:04:51.926795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:29.087 [2024-11-26 01:04:51.926803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.087 [2024-11-26 01:04:51.926910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.087 [2024-11-26 01:04:51.926921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:29.087 [2024-11-26 01:04:51.926930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.087 [2024-11-26 01:04:51.926937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.087 [2024-11-26 01:04:51.926988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:29.087 [2024-11-26 01:04:51.927002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:29.087 [2024-11-26 01:04:51.927011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:29.087 [2024-11-26 01:04:51.927019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:29.087 [2024-11-26 01:04:51.927170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 72.727 ms, result 0 00:19:29.659 00:19:29.659 00:19:29.659 01:04:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89512 00:19:29.659 01:04:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:29.659 01:04:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89512 00:19:29.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.659 01:04:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89512 ']' 00:19:29.659 01:04:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.659 01:04:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.659 01:04:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.659 01:04:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.659 01:04:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:29.659 [2024-11-26 01:04:52.466786] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:19:29.659 [2024-11-26 01:04:52.466944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89512 ] 00:19:29.921 [2024-11-26 01:04:52.602038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:29.921 [2024-11-26 01:04:52.632690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.921 [2024-11-26 01:04:52.662052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.493 01:04:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:30.493 01:04:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:30.493 01:04:53 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:30.755 [2024-11-26 01:04:53.511976] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:30.755 [2024-11-26 01:04:53.512053] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:31.017 [2024-11-26 01:04:53.689430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.689488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:31.017 [2024-11-26 01:04:53.689505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:31.017 [2024-11-26 01:04:53.689518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.692070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.692118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.017 [2024-11-26 01:04:53.692130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.524 ms 00:19:31.017 [2024-11-26 01:04:53.692138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.692245] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:31.017 [2024-11-26 01:04:53.692613] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:31.017 [2024-11-26 01:04:53.692659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.692668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.017 [2024-11-26 01:04:53.692679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:19:31.017 [2024-11-26 01:04:53.692688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.694435] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:31.017 [2024-11-26 01:04:53.698141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.698199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:31.017 [2024-11-26 01:04:53.698210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:19:31.017 [2024-11-26 01:04:53.698220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.698322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.698338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:31.017 [2024-11-26 01:04:53.698348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:31.017 [2024-11-26 01:04:53.698361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.706364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 
01:04:53.706413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.017 [2024-11-26 01:04:53.706423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.953 ms 00:19:31.017 [2024-11-26 01:04:53.706432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.706546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.706559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.017 [2024-11-26 01:04:53.706572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:31.017 [2024-11-26 01:04:53.706582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.706617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.706628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:31.017 [2024-11-26 01:04:53.706636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:31.017 [2024-11-26 01:04:53.706646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.706670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:31.017 [2024-11-26 01:04:53.708677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.708728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.017 [2024-11-26 01:04:53.708743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:19:31.017 [2024-11-26 01:04:53.708751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.708791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.708799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:31.017 [2024-11-26 01:04:53.708809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:31.017 [2024-11-26 01:04:53.708817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.708857] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:31.017 [2024-11-26 01:04:53.708879] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:31.017 [2024-11-26 01:04:53.708926] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:31.017 [2024-11-26 01:04:53.708943] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:31.017 [2024-11-26 01:04:53.709056] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:31.017 [2024-11-26 01:04:53.709067] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:31.017 [2024-11-26 01:04:53.709080] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:31.017 [2024-11-26 01:04:53.709092] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:31.017 [2024-11-26 01:04:53.709109] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:31.017 [2024-11-26 01:04:53.709117] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:31.017 [2024-11-26 01:04:53.709127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:31.017 [2024-11-26 01:04:53.709137] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:31.017 [2024-11-26 01:04:53.709148] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:31.017 [2024-11-26 01:04:53.709156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.709166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:31.017 [2024-11-26 01:04:53.709177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:19:31.017 [2024-11-26 01:04:53.709186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.709278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.017 [2024-11-26 01:04:53.709288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:31.017 [2024-11-26 01:04:53.709299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:31.017 [2024-11-26 01:04:53.709308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.017 [2024-11-26 01:04:53.709412] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:31.017 [2024-11-26 01:04:53.709431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:31.017 [2024-11-26 01:04:53.709439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.017 [2024-11-26 01:04:53.709450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:31.017 [2024-11-26 01:04:53.709468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:31.017 [2024-11-26 01:04:53.709483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:31.017 [2024-11-26 01:04:53.709496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.017 [2024-11-26 01:04:53.709512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:31.017 [2024-11-26 01:04:53.709521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:31.017 [2024-11-26 01:04:53.709527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:31.017 [2024-11-26 01:04:53.709535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:31.017 [2024-11-26 01:04:53.709542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:31.017 [2024-11-26 01:04:53.709551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:31.017 [2024-11-26 01:04:53.709569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:31.017 [2024-11-26 01:04:53.709576] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:31.017 [2024-11-26 01:04:53.709594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.017 [2024-11-26 01:04:53.709609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:31.017 [2024-11-26 01:04:53.709619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.017 [2024-11-26 01:04:53.709635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:31.017 [2024-11-26 01:04:53.709642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:31.017 [2024-11-26 01:04:53.709651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.018 [2024-11-26 01:04:53.709657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:31.018 [2024-11-26 01:04:53.709666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:31.018 [2024-11-26 01:04:53.709672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:31.018 [2024-11-26 01:04:53.709681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:31.018 [2024-11-26 01:04:53.709688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:31.018 [2024-11-26 01:04:53.709696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.018 [2024-11-26 01:04:53.709703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:31.018 [2024-11-26 01:04:53.709713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:31.018 [2024-11-26 01:04:53.709720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:31.018 [2024-11-26 01:04:53.709728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:31.018 [2024-11-26 01:04:53.709735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:31.018 [2024-11-26 01:04:53.709744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.018 [2024-11-26 01:04:53.709750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:31.018 [2024-11-26 01:04:53.709759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:31.018 [2024-11-26 01:04:53.709765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.018 [2024-11-26 01:04:53.709774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:31.018 [2024-11-26 01:04:53.709783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:31.018 [2024-11-26 01:04:53.709792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:31.018 [2024-11-26 01:04:53.709799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:31.018 [2024-11-26 01:04:53.709808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:31.018 [2024-11-26 01:04:53.709814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:31.018 [2024-11-26 01:04:53.709823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:31.018 [2024-11-26 01:04:53.709831] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:31.018 [2024-11-26 01:04:53.709864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:31.018 [2024-11-26 01:04:53.709872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:31.018 [2024-11-26 01:04:53.709882] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:31.018 [2024-11-26 01:04:53.709891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.018 [2024-11-26 01:04:53.709904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:31.018 [2024-11-26 01:04:53.709912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:31.018 [2024-11-26 01:04:53.709922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:31.018 [2024-11-26 01:04:53.709930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:31.018 [2024-11-26 01:04:53.709939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:31.018 [2024-11-26 01:04:53.709946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:31.018 [2024-11-26 01:04:53.709956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:31.018 [2024-11-26 01:04:53.709963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:31.018 [2024-11-26 01:04:53.709972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:31.018 [2024-11-26 01:04:53.709979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:31.018 [2024-11-26 01:04:53.709989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:31.018 [2024-11-26 01:04:53.709997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:31.018 [2024-11-26 01:04:53.710008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:31.018 [2024-11-26 01:04:53.710016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:31.018 [2024-11-26 01:04:53.710025] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:31.018 [2024-11-26 01:04:53.710034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:31.018 [2024-11-26 01:04:53.710048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:31.018 [2024-11-26 01:04:53.710069] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:31.018 [2024-11-26 01:04:53.710080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:31.018 [2024-11-26 01:04:53.710087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:31.018 [2024-11-26 01:04:53.710098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.710106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:31.018 [2024-11-26 01:04:53.710117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.753 ms 00:19:31.018 [2024-11-26 01:04:53.710125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.724312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.724358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.018 [2024-11-26 01:04:53.724372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.126 ms 00:19:31.018 [2024-11-26 01:04:53.724383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.724515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.724526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:31.018 [2024-11-26 01:04:53.724538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:31.018 [2024-11-26 01:04:53.724546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.737146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.737190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.018 [2024-11-26 01:04:53.737206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.575 ms 00:19:31.018 [2024-11-26 01:04:53.737213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.737278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.737288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.018 [2024-11-26 01:04:53.737299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:31.018 [2024-11-26 01:04:53.737311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.737866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.737902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.018 [2024-11-26 01:04:53.737916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:19:31.018 [2024-11-26 01:04:53.737925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.738107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.738117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.018 [2024-11-26 01:04:53.738128] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:31.018 [2024-11-26 01:04:53.738135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.746407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.746452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.018 [2024-11-26 01:04:53.746465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.243 ms 00:19:31.018 [2024-11-26 01:04:53.746472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.750293] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:31.018 [2024-11-26 01:04:53.750341] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:31.018 [2024-11-26 01:04:53.750356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.750364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:31.018 [2024-11-26 01:04:53.750374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.770 ms 00:19:31.018 [2024-11-26 01:04:53.750382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.766112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.018 [2024-11-26 01:04:53.766159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:31.018 [2024-11-26 01:04:53.766176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.666 ms 00:19:31.018 [2024-11-26 01:04:53.766185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.018 [2024-11-26 01:04:53.769150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.769196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:31.019 [2024-11-26 01:04:53.769208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.864 ms 00:19:31.019 [2024-11-26 01:04:53.769216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.771814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.771877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:31.019 [2024-11-26 01:04:53.771889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:19:31.019 [2024-11-26 01:04:53.771896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.772247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.772271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:31.019 [2024-11-26 01:04:53.772283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:19:31.019 [2024-11-26 01:04:53.772291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.806173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.806240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:31.019 [2024-11-26 01:04:53.806260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.844 ms 
00:19:31.019 [2024-11-26 01:04:53.806273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.814356] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:31.019 [2024-11-26 01:04:53.833111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.833167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:31.019 [2024-11-26 01:04:53.833187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.737 ms 00:19:31.019 [2024-11-26 01:04:53.833197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.833292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.833305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:31.019 [2024-11-26 01:04:53.833316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:31.019 [2024-11-26 01:04:53.833326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.833384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.833396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:31.019 [2024-11-26 01:04:53.833405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:31.019 [2024-11-26 01:04:53.833414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.833440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.833461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:31.019 [2024-11-26 01:04:53.833469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:31.019 [2024-11-26 01:04:53.833478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.833513] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:31.019 [2024-11-26 01:04:53.833525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.833533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:31.019 [2024-11-26 01:04:53.833543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:31.019 [2024-11-26 01:04:53.833555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.839704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.839755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:31.019 [2024-11-26 01:04:53.839772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.119 ms 00:19:31.019 [2024-11-26 01:04:53.839780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 01:04:53.839893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.019 [2024-11-26 01:04:53.839904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:31.019 [2024-11-26 01:04:53.839915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:31.019 [2024-11-26 01:04:53.839924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.019 [2024-11-26 
01:04:53.841102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.019 [2024-11-26 01:04:53.842483] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.339 ms, result 0 00:19:31.019 [2024-11-26 01:04:53.844353] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:31.019 Some configs were skipped because the RPC state that can call them passed over. 00:19:31.019 01:04:53 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:31.281 [2024-11-26 01:04:54.070042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.281 [2024-11-26 01:04:54.070118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:31.281 [2024-11-26 01:04:54.070132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.945 ms 00:19:31.281 [2024-11-26 01:04:54.070143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.281 [2024-11-26 01:04:54.070181] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.090 ms, result 0 00:19:31.281 true 00:19:31.281 01:04:54 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:31.543 [2024-11-26 01:04:54.286201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.543 [2024-11-26 01:04:54.286254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:31.543 [2024-11-26 01:04:54.286268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.844 ms 00:19:31.543 [2024-11-26 01:04:54.286276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.543 [2024-11-26 01:04:54.286318] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.966 ms, result 0 00:19:31.543 true 00:19:31.543 01:04:54 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89512 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89512 ']' 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89512 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89512 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.543 killing process with pid 89512 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89512' 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89512 00:19:31.543 01:04:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89512 00:19:31.543 [2024-11-26 01:04:54.446276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.543 [2024-11-26 01:04:54.446336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:31.543 [2024-11-26 01:04:54.446354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:31.543 [2024-11-26 
01:04:54.446366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.543 [2024-11-26 01:04:54.446388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:31.543 [2024-11-26 01:04:54.446917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.543 [2024-11-26 01:04:54.446951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:31.543 [2024-11-26 01:04:54.446962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:19:31.543 [2024-11-26 01:04:54.446969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.543 [2024-11-26 01:04:54.447267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.543 [2024-11-26 01:04:54.447279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:31.543 [2024-11-26 01:04:54.447289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:31.543 [2024-11-26 01:04:54.447298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.543 [2024-11-26 01:04:54.451790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.543 [2024-11-26 01:04:54.451830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:31.543 [2024-11-26 01:04:54.451851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.468 ms 00:19:31.543 [2024-11-26 01:04:54.451861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.806 [2024-11-26 01:04:54.458825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.806 [2024-11-26 01:04:54.458865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:31.806 [2024-11-26 01:04:54.458880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.924 ms 00:19:31.806 [2024-11-26 01:04:54.458889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.806 [2024-11-26 01:04:54.461548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.806 [2024-11-26 01:04:54.461585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:31.806 [2024-11-26 01:04:54.461596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:19:31.806 [2024-11-26 01:04:54.461603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.806 [2024-11-26 01:04:54.465539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.806 [2024-11-26 01:04:54.465577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:31.806 [2024-11-26 01:04:54.465596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.893 ms 00:19:31.806 [2024-11-26 01:04:54.465603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.806 [2024-11-26 01:04:54.465734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.806 [2024-11-26 01:04:54.465745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:31.806 [2024-11-26 01:04:54.465756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:31.806 [2024-11-26 01:04:54.465768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.806 [2024-11-26 01:04:54.468502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.806 [2024-11-26 01:04:54.468540] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:31.806 [2024-11-26 01:04:54.468555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.711 ms
00:19:31.806 [2024-11-26 01:04:54.468562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:31.806 [2024-11-26 01:04:54.470744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:31.806 [2024-11-26 01:04:54.470777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:31.806 [2024-11-26 01:04:54.470788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms
00:19:31.806 [2024-11-26 01:04:54.470795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:31.806 [2024-11-26 01:04:54.472717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:31.806 [2024-11-26 01:04:54.472765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:31.806 [2024-11-26 01:04:54.472777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.879 ms
00:19:31.806 [2024-11-26 01:04:54.472784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:31.806 [2024-11-26 01:04:54.474586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:31.806 [2024-11-26 01:04:54.474620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:31.806 [2024-11-26 01:04:54.474631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms
00:19:31.806 [2024-11-26 01:04:54.474637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:31.806 [2024-11-26 01:04:54.474672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:31.806 [2024-11-26 01:04:54.474686 - 01:04:54.475552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-100: 0 / 261120 wr_cnt: 0 state: free (100 identical entries)
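The statistics dump just below reports WAF: inf. WAF (write amplification factor) is total media writes divided by user writes; this run only trims and never writes user data, so user writes stays at 0 and the 960 media writes (presumably the metadata persists logged above) divide by zero. A minimal sketch of that computation, using a hypothetical waf helper that is not part of SPDK:

    # Hypothetical helper (not part of SPDK): mirrors the WAF line in the
    # dump below -- total media writes over user writes, "inf" when the
    # run wrote no user data.
    waf() { awk -v total="$1" -v user="$2" 'BEGIN { if (user > 0) print total / user; else print "inf" }'; }
    waf 960 0   # -> inf, matching "total writes: 960" / "user writes: 0"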
00:19:31.807 [2024-11-26 01:04:54.475568] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:31.807 [2024-11-26 01:04:54.475578] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d18485b-f600-4eee-9447-45835e109f8e
00:19:31.807 [2024-11-26 01:04:54.475589] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:31.807 [2024-11-26 01:04:54.475598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:31.807 [2024-11-26 01:04:54.475605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:31.807 [2024-11-26 01:04:54.475614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:31.807 [2024-11-26 01:04:54.475624] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:31.807 [2024-11-26 01:04:54.475634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:31.807 [2024-11-26 01:04:54.475642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:31.807 [2024-11-26 01:04:54.475650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:31.807 [2024-11-26 01:04:54.475656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:31.807 [2024-11-26 01:04:54.475665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:31.807 [2024-11-26 01:04:54.475676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:31.807 [2024-11-26 01:04:54.475688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms
00:19:31.808 [2024-11-26 01:04:54.475696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.477374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.808 [2024-11-26 01:04:54.477407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.808 [2024-11-26 01:04:54.477419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:19:31.808 [2024-11-26 01:04:54.477426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.477514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.808 [2024-11-26 01:04:54.477523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.808 [2024-11-26 01:04:54.477533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:31.808 [2024-11-26 01:04:54.477542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.483601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.483636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.808 [2024-11-26 01:04:54.483648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.483656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.483741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.483750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.808 [2024-11-26 01:04:54.483762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.483773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.483814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.483823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.808 [2024-11-26 01:04:54.483832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.483861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.483880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.483888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.808 [2024-11-26 01:04:54.483897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.483903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.494829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.494900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.808 [2024-11-26 01:04:54.494913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.494921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.503154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.808 [2024-11-26 01:04:54.503168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 
[2024-11-26 01:04:54.503176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.503231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.808 [2024-11-26 01:04:54.503241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.503249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.503289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.808 [2024-11-26 01:04:54.503299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.503306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.503384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.808 [2024-11-26 01:04:54.503393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.503401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.503445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.808 [2024-11-26 01:04:54.503456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.503463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.503514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.808 [2024-11-26 01:04:54.503524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.503532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.808 [2024-11-26 01:04:54.503586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.808 [2024-11-26 01:04:54.503596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.808 [2024-11-26 01:04:54.503604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.808 [2024-11-26 01:04:54.503744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.438 ms, result 0 00:19:31.808 01:04:54 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:31.808 01:04:54 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:32.070 [2024-11-26 01:04:54.765855] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
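trim.sh@85 above launches spdk_dd to read the first 65536 blocks of ftl0 into /home/vagrant/spdk_repo/spdk/test/ftl/data, a range that includes the 1024 blocks trimmed at LBA 0 earlier. Two sanity checks on the numbers in this phase, as plain-shell arithmetic (an editorial sketch; the 4096-byte FTL block size is inferred from the layout dump further down, not stated on this line):

    # 65536 blocks of 4096 B each is the 256 [MB] total that the copy
    # progress lines below report (at an average of 15 MBps).
    echo $(( 65536 * 4096 / 1024 / 1024 ))    # -> 256
    # The earlier second unmap (--lba 23591936 --num_blocks 1024) ends at
    # 23592960, exactly the L2P entry count in the layout dump, i.e. it
    # trimmed the final 1024 blocks of the device.
    echo $(( 23591936 + 1024 ))               # -> 23592960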
00:19:32.070 [2024-11-26 01:04:54.765994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89549 ] 00:19:32.070 [2024-11-26 01:04:54.899939] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:32.070 [2024-11-26 01:04:54.922744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.070 [2024-11-26 01:04:54.951214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.331 [2024-11-26 01:04:55.067750] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.331 [2024-11-26 01:04:55.067827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:32.331 [2024-11-26 01:04:55.228651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.331 [2024-11-26 01:04:55.228711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:32.331 [2024-11-26 01:04:55.228725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:32.331 [2024-11-26 01:04:55.228735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.331 [2024-11-26 01:04:55.231738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.331 [2024-11-26 01:04:55.231799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:32.331 [2024-11-26 01:04:55.231811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms 00:19:32.331 [2024-11-26 01:04:55.231824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.331 [2024-11-26 01:04:55.231975] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:32.331 [2024-11-26 01:04:55.232390] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:32.331 [2024-11-26 01:04:55.232437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.331 [2024-11-26 01:04:55.232447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:32.331 [2024-11-26 01:04:55.232458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.473 ms 00:19:32.331 [2024-11-26 01:04:55.232467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.331 [2024-11-26 01:04:55.234270] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:32.331 [2024-11-26 01:04:55.238202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.331 [2024-11-26 01:04:55.238253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:32.331 [2024-11-26 01:04:55.238264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.934 ms 00:19:32.331 [2024-11-26 01:04:55.238272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.331 [2024-11-26 01:04:55.238364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.331 [2024-11-26 01:04:55.238375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:32.331 [2024-11-26 01:04:55.238385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:32.331 [2024-11-26 
01:04:55.238393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.246950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.595 [2024-11-26 01:04:55.246989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:32.595 [2024-11-26 01:04:55.247001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.508 ms 00:19:32.595 [2024-11-26 01:04:55.247020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.247158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.595 [2024-11-26 01:04:55.247170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:32.595 [2024-11-26 01:04:55.247180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:32.595 [2024-11-26 01:04:55.247195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.247223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.595 [2024-11-26 01:04:55.247231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:32.595 [2024-11-26 01:04:55.247240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:32.595 [2024-11-26 01:04:55.247249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.247271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:32.595 [2024-11-26 01:04:55.249311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.595 [2024-11-26 01:04:55.249349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:32.595 [2024-11-26 01:04:55.249365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.046 ms 00:19:32.595 [2024-11-26 01:04:55.249377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.249426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.595 [2024-11-26 01:04:55.249435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:32.595 [2024-11-26 01:04:55.249445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:32.595 [2024-11-26 01:04:55.249458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.249482] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:32.595 [2024-11-26 01:04:55.249504] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:32.595 [2024-11-26 01:04:55.249546] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:32.595 [2024-11-26 01:04:55.249567] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:32.595 [2024-11-26 01:04:55.249674] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:32.595 [2024-11-26 01:04:55.249687] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:32.595 [2024-11-26 01:04:55.249703] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
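One relationship in the layout dump that follows is worth spelling out: 23592960 L2P entries at the reported 4-byte address size come to exactly 90 MiB, matching the l2p region advertised both as "blocks: 90.00 MiB" and as Region type:0x2 with blk_sz:0x5a00 in 4096-byte FTL blocks. A quick plain-shell check (editorial sketch, not test output):

    echo $(( 23592960 * 4 / 1024 / 1024 ))    # L2P entries x 4 B  -> 90 (MiB)
    echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # 0x5a00 FTL blocks  -> 90 (MiB)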
00:19:32.595 [2024-11-26 01:04:55.249716] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:32.595 [2024-11-26 01:04:55.249727] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:32.595 [2024-11-26 01:04:55.249736] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:32.595 [2024-11-26 01:04:55.249745] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:32.595 [2024-11-26 01:04:55.249756] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:32.595 [2024-11-26 01:04:55.249767] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:32.595 [2024-11-26 01:04:55.249777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.595 [2024-11-26 01:04:55.249786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:32.595 [2024-11-26 01:04:55.249796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:19:32.595 [2024-11-26 01:04:55.249805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.249925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.595 [2024-11-26 01:04:55.249937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:32.595 [2024-11-26 01:04:55.249947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:32.595 [2024-11-26 01:04:55.249959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.595 [2024-11-26 01:04:55.250089] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:32.595 [2024-11-26 01:04:55.250173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:32.595 [2024-11-26 01:04:55.250183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:32.595 [2024-11-26 01:04:55.250222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:32.595 [2024-11-26 01:04:55.250250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.595 [2024-11-26 01:04:55.250267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:32.595 [2024-11-26 01:04:55.250275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:32.595 [2024-11-26 01:04:55.250283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:32.595 [2024-11-26 01:04:55.250292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:32.595 [2024-11-26 01:04:55.250300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:32.595 [2024-11-26 01:04:55.250308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:32.595 [2024-11-26 01:04:55.250324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:32.595 [2024-11-26 01:04:55.250350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:32.595 [2024-11-26 01:04:55.250379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:32.595 [2024-11-26 01:04:55.250402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:32.595 [2024-11-26 01:04:55.250428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:32.595 [2024-11-26 01:04:55.250450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.595 [2024-11-26 01:04:55.250465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:32.595 [2024-11-26 01:04:55.250474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:32.595 [2024-11-26 01:04:55.250481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:32.595 [2024-11-26 01:04:55.250489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:32.595 [2024-11-26 01:04:55.250499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:32.595 [2024-11-26 01:04:55.250506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:32.595 [2024-11-26 01:04:55.250522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:32.595 [2024-11-26 01:04:55.250531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.595 [2024-11-26 01:04:55.250538] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:32.595 [2024-11-26 01:04:55.250547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:32.595 [2024-11-26 01:04:55.250555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:32.595 [2024-11-26 01:04:55.250564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:32.596 [2024-11-26 01:04:55.250574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:32.596 [2024-11-26 01:04:55.250582] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:32.596 [2024-11-26 01:04:55.250589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:32.596 [2024-11-26 01:04:55.250596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:32.596 [2024-11-26 01:04:55.250604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:32.596 [2024-11-26 01:04:55.250612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:32.596 [2024-11-26 01:04:55.250622] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:32.596 [2024-11-26 01:04:55.250635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.596 [2024-11-26 01:04:55.250645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:32.596 [2024-11-26 01:04:55.250653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:32.596 [2024-11-26 01:04:55.250661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:32.596 [2024-11-26 01:04:55.250670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:32.596 [2024-11-26 01:04:55.250679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:32.596 [2024-11-26 01:04:55.250687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:32.596 [2024-11-26 01:04:55.250696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:32.596 [2024-11-26 01:04:55.250704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:32.596 [2024-11-26 01:04:55.250713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:32.596 [2024-11-26 01:04:55.250721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:32.596 [2024-11-26 01:04:55.250729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:32.596 [2024-11-26 01:04:55.250738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:32.596 [2024-11-26 01:04:55.250747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:32.596 [2024-11-26 01:04:55.250756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:32.596 [2024-11-26 01:04:55.250765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:32.596 [2024-11-26 01:04:55.250779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:32.596 [2024-11-26 01:04:55.250789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:32.596 [2024-11-26 01:04:55.250800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:32.596 [2024-11-26 01:04:55.250808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:32.596 [2024-11-26 01:04:55.250817] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:32.596 [2024-11-26 01:04:55.250825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.250839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:32.596 [2024-11-26 01:04:55.250864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:19:32.596 [2024-11-26 01:04:55.250873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.264655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.264700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:32.596 [2024-11-26 01:04:55.264713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.726 ms 00:19:32.596 [2024-11-26 01:04:55.264729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.264881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.264893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:32.596 [2024-11-26 01:04:55.264902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:32.596 [2024-11-26 01:04:55.264910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.286837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.286922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:32.596 [2024-11-26 01:04:55.286940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.900 ms 00:19:32.596 [2024-11-26 01:04:55.286957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.287086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.287105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:32.596 [2024-11-26 01:04:55.287119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:32.596 [2024-11-26 01:04:55.287132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.287689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.287737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:32.596 [2024-11-26 01:04:55.287751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:19:32.596 [2024-11-26 01:04:55.287762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.287991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.288007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:32.596 [2024-11-26 01:04:55.288019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:19:32.596 [2024-11-26 01:04:55.288030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.297061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.297117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:32.596 [2024-11-26 01:04:55.297137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.998 ms 00:19:32.596 [2024-11-26 01:04:55.297148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.301150] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:32.596 [2024-11-26 01:04:55.301200] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:32.596 [2024-11-26 01:04:55.301213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.301222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:32.596 [2024-11-26 01:04:55.301231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:19:32.596 [2024-11-26 01:04:55.301239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.317288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.317344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:32.596 [2024-11-26 01:04:55.317358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.965 ms 00:19:32.596 [2024-11-26 01:04:55.317367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.320339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.320387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:32.596 [2024-11-26 01:04:55.320398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:19:32.596 [2024-11-26 01:04:55.320406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.323033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.323078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:32.596 [2024-11-26 01:04:55.323088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.533 ms 00:19:32.596 [2024-11-26 01:04:55.323097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.323444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.323457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:32.596 [2024-11-26 01:04:55.323467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:19:32.596 [2024-11-26 01:04:55.323481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.346139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.346199] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:32.596 [2024-11-26 01:04:55.346212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:19:32.596 [2024-11-26 01:04:55.346221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.596 [2024-11-26 01:04:55.354748] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:32.596 [2024-11-26 01:04:55.374243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.596 [2024-11-26 01:04:55.374295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:32.596 [2024-11-26 01:04:55.374308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.913 ms 00:19:32.596 [2024-11-26 01:04:55.374316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.597 [2024-11-26 01:04:55.374415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.597 [2024-11-26 01:04:55.374429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:32.597 [2024-11-26 01:04:55.374439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:32.597 [2024-11-26 01:04:55.374449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.597 [2024-11-26 01:04:55.374506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.597 [2024-11-26 01:04:55.374516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:32.597 [2024-11-26 01:04:55.374525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:32.597 [2024-11-26 01:04:55.374533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.597 [2024-11-26 01:04:55.374563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.597 [2024-11-26 01:04:55.374572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:32.597 [2024-11-26 01:04:55.374583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:32.597 [2024-11-26 01:04:55.374591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.597 [2024-11-26 01:04:55.374628] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:32.597 [2024-11-26 01:04:55.374638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.597 [2024-11-26 01:04:55.374646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:32.597 [2024-11-26 01:04:55.374655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:32.597 [2024-11-26 01:04:55.374663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.597 [2024-11-26 01:04:55.380620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.597 [2024-11-26 01:04:55.380671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:32.597 [2024-11-26 01:04:55.380683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.937 ms 00:19:32.597 [2024-11-26 01:04:55.380698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.597 [2024-11-26 01:04:55.380792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.597 [2024-11-26 01:04:55.380804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:32.597 [2024-11-26 01:04:55.380814] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:32.597 [2024-11-26 01:04:55.380821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.597 [2024-11-26 01:04:55.382129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:32.597 [2024-11-26 01:04:55.383552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.117 ms, result 0 00:19:32.597 [2024-11-26 01:04:55.384875] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:32.597 [2024-11-26 01:04:55.392201] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.543  [2024-11-26T01:04:57.405Z] Copying: 14/256 [MB] (14 MBps) [2024-11-26T01:04:58.795Z] Copying: 38/256 [MB] (23 MBps) [2024-11-26T01:04:59.740Z] Copying: 57/256 [MB] (19 MBps) [2024-11-26T01:05:00.686Z] Copying: 69/256 [MB] (12 MBps) [2024-11-26T01:05:01.629Z] Copying: 85/256 [MB] (15 MBps) [2024-11-26T01:05:02.574Z] Copying: 102/256 [MB] (16 MBps) [2024-11-26T01:05:03.518Z] Copying: 115/256 [MB] (13 MBps) [2024-11-26T01:05:04.463Z] Copying: 131/256 [MB] (16 MBps) [2024-11-26T01:05:05.406Z] Copying: 150/256 [MB] (18 MBps) [2024-11-26T01:05:06.793Z] Copying: 170/256 [MB] (20 MBps) [2024-11-26T01:05:07.737Z] Copying: 190/256 [MB] (20 MBps) [2024-11-26T01:05:08.683Z] Copying: 204/256 [MB] (14 MBps) [2024-11-26T01:05:09.627Z] Copying: 216/256 [MB] (11 MBps) [2024-11-26T01:05:10.571Z] Copying: 227/256 [MB] (10 MBps) [2024-11-26T01:05:11.514Z] Copying: 243/256 [MB] (16 MBps) [2024-11-26T01:05:11.777Z] Copying: 254/256 [MB] (10 MBps) [2024-11-26T01:05:11.777Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-26 01:05:11.527678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:48.860 [2024-11-26 01:05:11.529656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.860 [2024-11-26 01:05:11.529716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:48.860 [2024-11-26 01:05:11.529731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:48.860 [2024-11-26 01:05:11.529740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.529762] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:48.861 [2024-11-26 01:05:11.530479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.530518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:48.861 [2024-11-26 01:05:11.530530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:19:48.861 [2024-11-26 01:05:11.530549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.530817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.530869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:48.861 [2024-11-26 01:05:11.530884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:19:48.861 [2024-11-26 01:05:11.530892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.534597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.534624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:48.861 [2024-11-26 01:05:11.534634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.688 ms 00:19:48.861 [2024-11-26 01:05:11.534643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.541781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.541821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:48.861 [2024-11-26 01:05:11.541850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.119 ms 00:19:48.861 [2024-11-26 01:05:11.541859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.545043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.545095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:48.861 [2024-11-26 01:05:11.545105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:19:48.861 [2024-11-26 01:05:11.545112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.549734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.549796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:48.861 [2024-11-26 01:05:11.549807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:19:48.861 [2024-11-26 01:05:11.549814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.549967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.549980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:48.861 [2024-11-26 01:05:11.549997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:48.861 [2024-11-26 01:05:11.550005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.553294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.553344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:48.861 [2024-11-26 01:05:11.553355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.270 ms 00:19:48.861 [2024-11-26 01:05:11.553362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.556318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.556365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:48.861 [2024-11-26 01:05:11.556374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:19:48.861 [2024-11-26 01:05:11.556382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.558829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.558892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:48.861 [2024-11-26 01:05:11.558902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.404 ms 00:19:48.861 [2024-11-26 01:05:11.558908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 
01:05:11.561424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.861 [2024-11-26 01:05:11.561472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:48.861 [2024-11-26 01:05:11.561481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:19:48.861 [2024-11-26 01:05:11.561488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.861 [2024-11-26 01:05:11.561529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:48.861 [2024-11-26 01:05:11.561544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 
[2024-11-26 01:05:11.561704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:48.861 [2024-11-26 01:05:11.561894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:19:48.862 [2024-11-26 01:05:11.561910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.561995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:48.862 [2024-11-26 01:05:11.562397] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:48.862 [2024-11-26 01:05:11.562406] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d18485b-f600-4eee-9447-45835e109f8e 00:19:48.862 [2024-11-26 01:05:11.562414] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:48.862 [2024-11-26 01:05:11.562422] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:48.862 [2024-11-26 01:05:11.562430] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:48.862 [2024-11-26 01:05:11.562438] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:48.862 [2024-11-26 01:05:11.562445] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:48.862 [2024-11-26 01:05:11.562457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:48.862 [2024-11-26 01:05:11.562465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:48.862 [2024-11-26 01:05:11.562471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:48.862 [2024-11-26 01:05:11.562479] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:48.862 [2024-11-26 01:05:11.562486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.862 [2024-11-26 01:05:11.562494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:48.862 [2024-11-26 01:05:11.562504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:19:48.862 [2024-11-26 01:05:11.562512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.862 [2024-11-26 01:05:11.564805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.862 [2024-11-26 01:05:11.564859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:48.862 [2024-11-26 01:05:11.564878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:19:48.862 [2024-11-26 01:05:11.564891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.862 [2024-11-26 01:05:11.565001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:48.862 [2024-11-26 01:05:11.565010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:48.862 [2024-11-26 01:05:11.565020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:48.862 [2024-11-26 01:05:11.565027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.862 [2024-11-26 01:05:11.573070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.862 [2024-11-26 01:05:11.573118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:48.862 [2024-11-26 01:05:11.573137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:48.862 [2024-11-26 01:05:11.573145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.862 [2024-11-26 01:05:11.573234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.862 [2024-11-26 01:05:11.573244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:48.862 [2024-11-26 01:05:11.573254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.862 [2024-11-26 01:05:11.573263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.862 [2024-11-26 01:05:11.573319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.862 [2024-11-26 01:05:11.573329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:48.862 [2024-11-26 01:05:11.573342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.862 [2024-11-26 01:05:11.573353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.862 [2024-11-26 01:05:11.573371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.862 [2024-11-26 01:05:11.573380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:48.862 [2024-11-26 01:05:11.573388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.862 [2024-11-26 01:05:11.573396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.862 [2024-11-26 01:05:11.586533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.862 [2024-11-26 01:05:11.586585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:48.863 [2024-11-26 01:05:11.586596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.863 [2024-11-26 01:05:11.586611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.863 [2024-11-26 01:05:11.596354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.863 [2024-11-26 01:05:11.596403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:48.863 [2024-11-26 01:05:11.596414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.863 [2024-11-26 01:05:11.596422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.863 [2024-11-26 01:05:11.596451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.863 [2024-11-26 01:05:11.596460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:48.863 [2024-11-26 01:05:11.596468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.863 [2024-11-26 01:05:11.596477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.863 [2024-11-26 01:05:11.596515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.863 [2024-11-26 01:05:11.596524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:48.863 [2024-11-26 01:05:11.596534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:48.863 [2024-11-26 01:05:11.596542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:48.863 [2024-11-26 01:05:11.596615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:48.863 [2024-11-26 01:05:11.596624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:19:48.863 [2024-11-26 01:05:11.596638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.863 [2024-11-26 01:05:11.596646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.863 [2024-11-26 01:05:11.596692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:48.863 [2024-11-26 01:05:11.596704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:48.863 [2024-11-26 01:05:11.596714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.863 [2024-11-26 01:05:11.596721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.863 [2024-11-26 01:05:11.596762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:48.863 [2024-11-26 01:05:11.596784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:48.863 [2024-11-26 01:05:11.596793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.863 [2024-11-26 01:05:11.596801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.863 [2024-11-26 01:05:11.596869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:48.863 [2024-11-26 01:05:11.596881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:48.863 [2024-11-26 01:05:11.596894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:48.863 [2024-11-26 01:05:11.596902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:48.863 [2024-11-26 01:05:11.597050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.387 ms, result 0
00:19:49.123
00:19:49.123
00:19:49.123 01:05:11 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
00:19:49.123 01:05:11 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
00:19:49.696 01:05:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-26 01:05:12.442881] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
[2024-11-26 01:05:12.443031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89741 ]
[2024-11-26 01:05:12.579314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
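[annotation] The three ftl/trim.sh steps above are the checkpoint of this pass: after the 'FTL shutdown' management process finishes, the test compares the first 4194304 bytes of the read-back file against /dev/zero (the trimmed range must read back as zeroes), takes its md5sum, then uses spdk_dd to write 1024 blocks of a random pattern back through ftl0. Below is a minimal Python sketch of the same two verification checks, assuming only that the data file path from the log exists locally; it is illustrative tooling written for this annotation, not part of SPDK or trim.sh.

    #!/usr/bin/env python3
    # Illustrative re-implementation of `cmp --bytes=4194304 DATA /dev/zero`
    # and `md5sum DATA` from the trim.sh steps above. Not SPDK code.
    import hashlib

    DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path taken from the log
    NBYTES = 4 * 1024 * 1024                             # --bytes=4194304

    def first_bytes_are_zero(path: str, nbytes: int) -> bool:
        # Equivalent of `cmp --bytes=N path /dev/zero`: the trimmed
        # range must read back as all zeroes.
        with open(path, "rb") as f:
            return f.read(nbytes) == b"\x00" * nbytes

    def md5_of(path: str) -> str:
        # Equivalent of `md5sum path`, streamed in 1 MiB chunks so the
        # whole file never has to sit in memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        assert first_bytes_are_zero(DATA, NBYTES), "trimmed range is not zeroed"
        print(md5_of(DATA), DATA)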
00:19:49.696 [2024-11-26 01:05:12.609299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.957 [2024-11-26 01:05:12.638557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.957 [2024-11-26 01:05:12.753627] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.957 [2024-11-26 01:05:12.753713] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:50.219 [2024-11-26 01:05:12.914970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.915031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:50.219 [2024-11-26 01:05:12.915046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:50.219 [2024-11-26 01:05:12.915055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.917692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.917750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.219 [2024-11-26 01:05:12.917765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:19:50.219 [2024-11-26 01:05:12.917774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.917906] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:50.219 [2024-11-26 01:05:12.918242] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:50.219 [2024-11-26 01:05:12.918269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.918282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.219 [2024-11-26 01:05:12.918292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:19:50.219 [2024-11-26 01:05:12.918307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.920174] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:50.219 [2024-11-26 01:05:12.923818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.923888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:50.219 [2024-11-26 01:05:12.923901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.647 ms 00:19:50.219 [2024-11-26 01:05:12.923910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.924013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.924027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:50.219 [2024-11-26 01:05:12.924037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:50.219 [2024-11-26 01:05:12.924045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.932262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.932306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.219 [2024-11-26 01:05:12.932317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.161 ms 00:19:50.219 [2024-11-26 01:05:12.932329] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.932467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.932478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.219 [2024-11-26 01:05:12.932487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:50.219 [2024-11-26 01:05:12.932499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.932530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.932539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:50.219 [2024-11-26 01:05:12.932547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:50.219 [2024-11-26 01:05:12.932554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.932576] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:50.219 [2024-11-26 01:05:12.934681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.934718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.219 [2024-11-26 01:05:12.934733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:19:50.219 [2024-11-26 01:05:12.934749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.934792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.934801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:50.219 [2024-11-26 01:05:12.934813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:50.219 [2024-11-26 01:05:12.934821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.934862] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:50.219 [2024-11-26 01:05:12.934884] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:50.219 [2024-11-26 01:05:12.934925] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:50.219 [2024-11-26 01:05:12.934947] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:50.219 [2024-11-26 01:05:12.935053] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:50.219 [2024-11-26 01:05:12.935065] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:50.219 [2024-11-26 01:05:12.935076] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:50.219 [2024-11-26 01:05:12.935086] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:50.219 [2024-11-26 01:05:12.935095] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:50.219 [2024-11-26 01:05:12.935104] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:50.219 [2024-11-26 01:05:12.935111] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:19:50.219 [2024-11-26 01:05:12.935122] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:50.219 [2024-11-26 01:05:12.935132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:50.219 [2024-11-26 01:05:12.935140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.935148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:50.219 [2024-11-26 01:05:12.935157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:19:50.219 [2024-11-26 01:05:12.935165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.935253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.219 [2024-11-26 01:05:12.935263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:50.219 [2024-11-26 01:05:12.935270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:50.219 [2024-11-26 01:05:12.935278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.219 [2024-11-26 01:05:12.935380] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:50.219 [2024-11-26 01:05:12.935399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:50.219 [2024-11-26 01:05:12.935409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.219 [2024-11-26 01:05:12.935418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.219 [2024-11-26 01:05:12.935434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:50.219 [2024-11-26 01:05:12.935442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:50.219 [2024-11-26 01:05:12.935453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:50.219 [2024-11-26 01:05:12.935461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:50.219 [2024-11-26 01:05:12.935470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:50.219 [2024-11-26 01:05:12.935478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.219 [2024-11-26 01:05:12.935487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:50.219 [2024-11-26 01:05:12.935495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:50.219 [2024-11-26 01:05:12.935503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:50.219 [2024-11-26 01:05:12.935511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:50.219 [2024-11-26 01:05:12.935519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:50.220 [2024-11-26 01:05:12.935528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:50.220 [2024-11-26 01:05:12.935546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:50.220 [2024-11-26 01:05:12.935554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:50.220 [2024-11-26 01:05:12.935571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935579] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.220 [2024-11-26 01:05:12.935594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:50.220 [2024-11-26 01:05:12.935602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.220 [2024-11-26 01:05:12.935618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:50.220 [2024-11-26 01:05:12.935627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.220 [2024-11-26 01:05:12.935643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:50.220 [2024-11-26 01:05:12.935651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:50.220 [2024-11-26 01:05:12.935667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:50.220 [2024-11-26 01:05:12.935675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.220 [2024-11-26 01:05:12.935691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:50.220 [2024-11-26 01:05:12.935699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:50.220 [2024-11-26 01:05:12.935707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:50.220 [2024-11-26 01:05:12.935714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:50.220 [2024-11-26 01:05:12.935724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:50.220 [2024-11-26 01:05:12.935732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:50.220 [2024-11-26 01:05:12.935747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:50.220 [2024-11-26 01:05:12.935754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935762] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:50.220 [2024-11-26 01:05:12.935771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:50.220 [2024-11-26 01:05:12.935780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:50.220 [2024-11-26 01:05:12.935788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:50.220 [2024-11-26 01:05:12.935796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:50.220 [2024-11-26 01:05:12.935808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:50.220 [2024-11-26 01:05:12.935816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:50.220 [2024-11-26 01:05:12.935824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:50.220 [2024-11-26 01:05:12.935832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:50.220 [2024-11-26 01:05:12.935854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:50.220 [2024-11-26 01:05:12.935864] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:50.220 [2024-11-26 01:05:12.935876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.220 [2024-11-26 01:05:12.935885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:50.220 [2024-11-26 01:05:12.935892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:50.220 [2024-11-26 01:05:12.935900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:50.220 [2024-11-26 01:05:12.935907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:50.220 [2024-11-26 01:05:12.935915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:50.220 [2024-11-26 01:05:12.935922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:50.220 [2024-11-26 01:05:12.935930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:50.220 [2024-11-26 01:05:12.935937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:50.220 [2024-11-26 01:05:12.935947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:50.220 [2024-11-26 01:05:12.935958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:50.220 [2024-11-26 01:05:12.935966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:50.220 [2024-11-26 01:05:12.935973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:50.220 [2024-11-26 01:05:12.935980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:50.220 [2024-11-26 01:05:12.935988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:50.220 [2024-11-26 01:05:12.935995] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:50.220 [2024-11-26 01:05:12.936009] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:50.220 [2024-11-26 01:05:12.936017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:50.220 [2024-11-26 01:05:12.936025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:50.220 [2024-11-26 01:05:12.936032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:50.220 [2024-11-26 01:05:12.936039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:50.220 [2024-11-26 01:05:12.936048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.936055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:50.220 [2024-11-26 01:05:12.936068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:19:50.220 [2024-11-26 01:05:12.936075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.949735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.949780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.220 [2024-11-26 01:05:12.949792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.609 ms 00:19:50.220 [2024-11-26 01:05:12.949801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.949953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.949965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:50.220 [2024-11-26 01:05:12.949981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:50.220 [2024-11-26 01:05:12.949989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.981364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.981467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.220 [2024-11-26 01:05:12.981503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.343 ms 00:19:50.220 [2024-11-26 01:05:12.981537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.981751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.981799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.220 [2024-11-26 01:05:12.981827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:50.220 [2024-11-26 01:05:12.981903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.982666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.982737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.220 [2024-11-26 01:05:12.982753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:19:50.220 [2024-11-26 01:05:12.982763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.982952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.982965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.220 [2024-11-26 01:05:12.982980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:50.220 [2024-11-26 01:05:12.982992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.991404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 
01:05:12.991454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.220 [2024-11-26 01:05:12.991464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.386 ms 00:19:50.220 [2024-11-26 01:05:12.991472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:12.995377] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:50.220 [2024-11-26 01:05:12.995427] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:50.220 [2024-11-26 01:05:12.995439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:12.995446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:50.220 [2024-11-26 01:05:12.995455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.864 ms 00:19:50.220 [2024-11-26 01:05:12.995463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.220 [2024-11-26 01:05:13.011203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.220 [2024-11-26 01:05:13.011251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:50.221 [2024-11-26 01:05:13.011264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.657 ms 00:19:50.221 [2024-11-26 01:05:13.011273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.014346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.014393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:50.221 [2024-11-26 01:05:13.014404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:19:50.221 [2024-11-26 01:05:13.014411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.017083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.017127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:50.221 [2024-11-26 01:05:13.017137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:19:50.221 [2024-11-26 01:05:13.017145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.017491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.017516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:50.221 [2024-11-26 01:05:13.017526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:50.221 [2024-11-26 01:05:13.017534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.039941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.039996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:50.221 [2024-11-26 01:05:13.040010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.383 ms 00:19:50.221 [2024-11-26 01:05:13.040019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.047960] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:50.221 [2024-11-26 01:05:13.066608] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.066657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:50.221 [2024-11-26 01:05:13.066669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.493 ms 00:19:50.221 [2024-11-26 01:05:13.066677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.066771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.066786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:50.221 [2024-11-26 01:05:13.066796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:50.221 [2024-11-26 01:05:13.066805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.066884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.066903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:50.221 [2024-11-26 01:05:13.066912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:50.221 [2024-11-26 01:05:13.066920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.066948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.066957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:50.221 [2024-11-26 01:05:13.066968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:50.221 [2024-11-26 01:05:13.066976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.067010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:50.221 [2024-11-26 01:05:13.067020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.067029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:50.221 [2024-11-26 01:05:13.067037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:50.221 [2024-11-26 01:05:13.067045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.072791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.072856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:50.221 [2024-11-26 01:05:13.072868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:19:50.221 [2024-11-26 01:05:13.072884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.072977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.221 [2024-11-26 01:05:13.072990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:50.221 [2024-11-26 01:05:13.073000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:50.221 [2024-11-26 01:05:13.073007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.221 [2024-11-26 01:05:13.074004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.221 [2024-11-26 01:05:13.075338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.704 
ms, result 0 00:19:50.221 [2024-11-26 01:05:13.076706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.221 [2024-11-26 01:05:13.083966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:50.797  [2024-11-26T01:05:13.714Z] Copying: 4096/4096 [kB] (average 12 MBps)[2024-11-26 01:05:13.417052] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.797 [2024-11-26 01:05:13.418950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.418998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.797 [2024-11-26 01:05:13.419012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:50.797 [2024-11-26 01:05:13.419020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.419042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:50.797 [2024-11-26 01:05:13.419730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.419766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.797 [2024-11-26 01:05:13.419778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:19:50.797 [2024-11-26 01:05:13.419787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.423241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.423298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.797 [2024-11-26 01:05:13.423309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:19:50.797 [2024-11-26 01:05:13.423317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.427378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.427410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.797 [2024-11-26 01:05:13.427420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.044 ms 00:19:50.797 [2024-11-26 01:05:13.427429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.434755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.434797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.797 [2024-11-26 01:05:13.434822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.296 ms 00:19:50.797 [2024-11-26 01:05:13.434830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.439063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.439112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.797 [2024-11-26 01:05:13.439123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 00:19:50.797 [2024-11-26 01:05:13.439131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.444221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.444288] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.797 [2024-11-26 01:05:13.444302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.041 ms 00:19:50.797 [2024-11-26 01:05:13.444311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.444431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.444443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.797 [2024-11-26 01:05:13.444460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:50.797 [2024-11-26 01:05:13.444468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.447193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.447249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.797 [2024-11-26 01:05:13.447260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:19:50.797 [2024-11-26 01:05:13.447267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.450209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.450268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.797 [2024-11-26 01:05:13.450280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.893 ms 00:19:50.797 [2024-11-26 01:05:13.450288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.452105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.452154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.797 [2024-11-26 01:05:13.452165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms 00:19:50.797 [2024-11-26 01:05:13.452173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.454203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.797 [2024-11-26 01:05:13.454266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.797 [2024-11-26 01:05:13.454278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.933 ms 00:19:50.797 [2024-11-26 01:05:13.454287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.797 [2024-11-26 01:05:13.454344] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.797 [2024-11-26 01:05:13.454362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 
01:05:13.454423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.797 [2024-11-26 01:05:13.454575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 
00:19:50.798 [2024-11-26 01:05:13.454607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 
wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.454997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:50.798 [2024-11-26 01:05:13.455177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:50.798 [2024-11-26 01:05:13.455186] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d18485b-f600-4eee-9447-45835e109f8e 00:19:50.798 [2024-11-26 01:05:13.455194] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:50.798 [2024-11-26 01:05:13.455205] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:50.798 [2024-11-26 01:05:13.455213] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:50.798 [2024-11-26 01:05:13.455221] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:50.798 [2024-11-26 01:05:13.455228] ftl_debug.c: 
218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:50.798 [2024-11-26 01:05:13.455242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:50.798 [2024-11-26 01:05:13.455250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:50.798 [2024-11-26 01:05:13.455257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:50.798 [2024-11-26 01:05:13.455264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:50.798 [2024-11-26 01:05:13.455272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.798 [2024-11-26 01:05:13.455280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:50.798 [2024-11-26 01:05:13.455289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:19:50.798 [2024-11-26 01:05:13.455297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.798 [2024-11-26 01:05:13.457600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.798 [2024-11-26 01:05:13.457640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:50.798 [2024-11-26 01:05:13.457651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.267 ms 00:19:50.798 [2024-11-26 01:05:13.457670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.798 [2024-11-26 01:05:13.457789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.798 [2024-11-26 01:05:13.457799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:50.798 [2024-11-26 01:05:13.457808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:19:50.799 [2024-11-26 01:05:13.457815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.465904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.465951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:50.799 [2024-11-26 01:05:13.465968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.465976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.466066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.466075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:50.799 [2024-11-26 01:05:13.466084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.466092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.466137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.466153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:50.799 [2024-11-26 01:05:13.466161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.466172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.466191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.466199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:50.799 [2024-11-26 01:05:13.466208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:50.799 [2024-11-26 01:05:13.466215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.479462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.479518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:50.799 [2024-11-26 01:05:13.479529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.479546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.489422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.489477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:50.799 [2024-11-26 01:05:13.489497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.489505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.489535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.489544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:50.799 [2024-11-26 01:05:13.489553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.489560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.489609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.489619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:50.799 [2024-11-26 01:05:13.489627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.489635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.489706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.489716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:50.799 [2024-11-26 01:05:13.489725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.489733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.489774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.489786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:50.799 [2024-11-26 01:05:13.489795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.489803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.489861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.489875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:50.799 [2024-11-26 01:05:13.489883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.489891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.489943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:50.799 [2024-11-26 01:05:13.489960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:50.799 [2024-11-26 01:05:13.489969] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:50.799 [2024-11-26 01:05:13.489980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.799 [2024-11-26 01:05:13.490141] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.185 ms, result 0 00:19:50.799 00:19:50.799 00:19:50.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.799 01:05:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=89756 00:19:50.799 01:05:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 89756 00:19:50.799 01:05:13 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 89756 ']' 00:19:50.799 01:05:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:50.799 01:05:13 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.799 01:05:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.799 01:05:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.799 01:05:13 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.799 01:05:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:51.060 [2024-11-26 01:05:13.787375] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:19:51.060 [2024-11-26 01:05:13.787743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89756 ] 00:19:51.060 [2024-11-26 01:05:13.924462] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
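The xtrace entries above capture the standard SPDK test-harness sequence: launch spdk_tgt in the background, record its pid, wait for the RPC socket, drive the test through rpc.py, and finally kill the target. A minimal sketch of that flow, assuming the waitforlisten and killprocess helpers from common/autotest_common.sh that the trace itself references, with $rootdir as a hypothetical stand-in for /home/vagrant/spdk_repo/spdk:

    # start the target with FTL init logging, as trim.sh@92 does
    "$rootdir/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!                       # pid captured at trim.sh@93 (89756 in this run)
    waitforlisten "$svcpid"         # blocks until /var/tmp/spdk.sock accepts RPCs
    "$rootdir/scripts/rpc.py" load_config   # trim.sh@96: replay the saved bdev config
    # the trim RPCs issued later at trim.sh@99 and trim.sh@100
    "$rootdir/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$rootdir/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    killprocess "$svcpid"           # trim.sh@102: shut the target down

Each bdev_ftl_unmap call shows up below as an 'FTL trim' management process in the trace output, and the final killprocess triggers the 'FTL shutdown' sequence.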
00:19:51.060 [2024-11-26 01:05:13.950981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.321 [2024-11-26 01:05:13.979438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.894 01:05:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.894 01:05:14 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:51.894 01:05:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:52.158 [2024-11-26 01:05:14.852968] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.158 [2024-11-26 01:05:14.853043] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:52.158 [2024-11-26 01:05:15.030218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.030274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:52.158 [2024-11-26 01:05:15.030291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.158 [2024-11-26 01:05:15.030300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.032779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.032830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.158 [2024-11-26 01:05:15.032858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:19:52.158 [2024-11-26 01:05:15.032867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.032972] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:52.158 [2024-11-26 01:05:15.033356] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:52.158 [2024-11-26 01:05:15.033399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.033407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.158 [2024-11-26 01:05:15.033425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:19:52.158 [2024-11-26 01:05:15.033433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.035164] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:52.158 [2024-11-26 01:05:15.038914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.038966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:52.158 [2024-11-26 01:05:15.038977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.758 ms 00:19:52.158 [2024-11-26 01:05:15.038987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.039068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.039089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:52.158 [2024-11-26 01:05:15.039098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:52.158 [2024-11-26 01:05:15.039111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.046919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 
01:05:15.046961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.158 [2024-11-26 01:05:15.046971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.758 ms 00:19:52.158 [2024-11-26 01:05:15.046982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.047092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.047107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.158 [2024-11-26 01:05:15.047119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:52.158 [2024-11-26 01:05:15.047129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.047156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.047168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:52.158 [2024-11-26 01:05:15.047176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:52.158 [2024-11-26 01:05:15.047185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.047209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:52.158 [2024-11-26 01:05:15.049238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.049276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.158 [2024-11-26 01:05:15.049288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:19:52.158 [2024-11-26 01:05:15.049296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.049336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.049350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:52.158 [2024-11-26 01:05:15.049360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:52.158 [2024-11-26 01:05:15.049369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.049390] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:52.158 [2024-11-26 01:05:15.049410] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:52.158 [2024-11-26 01:05:15.049456] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:52.158 [2024-11-26 01:05:15.049477] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:52.158 [2024-11-26 01:05:15.049589] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:52.158 [2024-11-26 01:05:15.049601] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:52.158 [2024-11-26 01:05:15.049617] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:52.158 [2024-11-26 01:05:15.049629] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:52.158 [2024-11-26 01:05:15.049643] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:52.158 [2024-11-26 01:05:15.049651] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:52.158 [2024-11-26 01:05:15.049661] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:52.158 [2024-11-26 01:05:15.049671] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:52.158 [2024-11-26 01:05:15.049682] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:52.158 [2024-11-26 01:05:15.049689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.049698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:52.158 [2024-11-26 01:05:15.049706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:19:52.158 [2024-11-26 01:05:15.049716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.049803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.158 [2024-11-26 01:05:15.049815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:52.158 [2024-11-26 01:05:15.049822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:52.158 [2024-11-26 01:05:15.049831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.158 [2024-11-26 01:05:15.049952] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:52.158 [2024-11-26 01:05:15.049964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:52.158 [2024-11-26 01:05:15.049973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.158 [2024-11-26 01:05:15.049985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.158 [2024-11-26 01:05:15.049993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:52.158 [2024-11-26 01:05:15.050002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:52.158 [2024-11-26 01:05:15.050009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:52.158 [2024-11-26 01:05:15.050020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:52.158 [2024-11-26 01:05:15.050033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:52.158 [2024-11-26 01:05:15.050043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.158 [2024-11-26 01:05:15.050049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:52.158 [2024-11-26 01:05:15.050076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:52.158 [2024-11-26 01:05:15.050083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:52.158 [2024-11-26 01:05:15.050092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:52.158 [2024-11-26 01:05:15.050099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:52.158 [2024-11-26 01:05:15.050109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.158 [2024-11-26 01:05:15.050118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:52.158 [2024-11-26 01:05:15.050127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:52.158 [2024-11-26 01:05:15.050135] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:52.158 [2024-11-26 01:05:15.050146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:52.158 [2024-11-26 01:05:15.050154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:52.158 [2024-11-26 01:05:15.050163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.158 [2024-11-26 01:05:15.050170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:52.158 [2024-11-26 01:05:15.050179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:52.158 [2024-11-26 01:05:15.050185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.158 [2024-11-26 01:05:15.050195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:52.158 [2024-11-26 01:05:15.050202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:52.159 [2024-11-26 01:05:15.050210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.159 [2024-11-26 01:05:15.050217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:52.159 [2024-11-26 01:05:15.050225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:52.159 [2024-11-26 01:05:15.050232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:52.159 [2024-11-26 01:05:15.050245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:52.159 [2024-11-26 01:05:15.050253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:52.159 [2024-11-26 01:05:15.050262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.159 [2024-11-26 01:05:15.050269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:52.159 [2024-11-26 01:05:15.050280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:52.159 [2024-11-26 01:05:15.050287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:52.159 [2024-11-26 01:05:15.050296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:52.159 [2024-11-26 01:05:15.050303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:52.159 [2024-11-26 01:05:15.050312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.159 [2024-11-26 01:05:15.050319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:52.159 [2024-11-26 01:05:15.050328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:52.159 [2024-11-26 01:05:15.050335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.159 [2024-11-26 01:05:15.050344] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:52.159 [2024-11-26 01:05:15.050353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:52.159 [2024-11-26 01:05:15.050362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:52.159 [2024-11-26 01:05:15.050370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:52.159 [2024-11-26 01:05:15.050379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:52.159 [2024-11-26 01:05:15.050387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:52.159 [2024-11-26 01:05:15.050397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:52.159 [2024-11-26 01:05:15.050405] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:52.159 [2024-11-26 01:05:15.050415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:52.159 [2024-11-26 01:05:15.050423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:52.159 [2024-11-26 01:05:15.050434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:52.159 [2024-11-26 01:05:15.050445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.159 [2024-11-26 01:05:15.050459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:52.159 [2024-11-26 01:05:15.050468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:52.159 [2024-11-26 01:05:15.050480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:52.159 [2024-11-26 01:05:15.050489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:52.159 [2024-11-26 01:05:15.050500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:52.159 [2024-11-26 01:05:15.050508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:52.159 [2024-11-26 01:05:15.050518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:52.159 [2024-11-26 01:05:15.050529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:52.159 [2024-11-26 01:05:15.050538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:52.159 [2024-11-26 01:05:15.050547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:52.159 [2024-11-26 01:05:15.050559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:52.159 [2024-11-26 01:05:15.050567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:52.159 [2024-11-26 01:05:15.050579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:52.159 [2024-11-26 01:05:15.050588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:52.159 [2024-11-26 01:05:15.050599] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:52.159 [2024-11-26 01:05:15.050610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:52.159 [2024-11-26 01:05:15.050620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:19:52.159 [2024-11-26 01:05:15.050628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:52.159 [2024-11-26 01:05:15.050637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:52.159 [2024-11-26 01:05:15.050643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:52.159 [2024-11-26 01:05:15.050653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.159 [2024-11-26 01:05:15.050660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:52.159 [2024-11-26 01:05:15.050671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:19:52.159 [2024-11-26 01:05:15.050682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.159 [2024-11-26 01:05:15.064385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.159 [2024-11-26 01:05:15.064430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.159 [2024-11-26 01:05:15.064445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.643 ms 00:19:52.159 [2024-11-26 01:05:15.064458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.159 [2024-11-26 01:05:15.064589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.159 [2024-11-26 01:05:15.064604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:52.159 [2024-11-26 01:05:15.064615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:52.159 [2024-11-26 01:05:15.064627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.421 [2024-11-26 01:05:15.077240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.421 [2024-11-26 01:05:15.077283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.421 [2024-11-26 01:05:15.077298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.587 ms 00:19:52.421 [2024-11-26 01:05:15.077307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.421 [2024-11-26 01:05:15.077377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.421 [2024-11-26 01:05:15.077387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.421 [2024-11-26 01:05:15.077397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:52.421 [2024-11-26 01:05:15.077405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.421 [2024-11-26 01:05:15.077962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.421 [2024-11-26 01:05:15.077991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.421 [2024-11-26 01:05:15.078005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:19:52.421 [2024-11-26 01:05:15.078016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.421 [2024-11-26 01:05:15.078185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.078196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.422 [2024-11-26 01:05:15.078207] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:52.422 [2024-11-26 01:05:15.078215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.086326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.086379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.422 [2024-11-26 01:05:15.086391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.082 ms 00:19:52.422 [2024-11-26 01:05:15.086399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.090250] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:52.422 [2024-11-26 01:05:15.090305] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:52.422 [2024-11-26 01:05:15.090319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.090328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:52.422 [2024-11-26 01:05:15.090339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.811 ms 00:19:52.422 [2024-11-26 01:05:15.090347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.106073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.106121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:52.422 [2024-11-26 01:05:15.106138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.649 ms 00:19:52.422 [2024-11-26 01:05:15.106147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.108898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.108940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:52.422 [2024-11-26 01:05:15.108952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.653 ms 00:19:52.422 [2024-11-26 01:05:15.108959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.111564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.111609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:52.422 [2024-11-26 01:05:15.111622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:19:52.422 [2024-11-26 01:05:15.111629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.112005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.112019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:52.422 [2024-11-26 01:05:15.112031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:19:52.422 [2024-11-26 01:05:15.112040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.145257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.145326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:52.422 [2024-11-26 01:05:15.145347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.187 ms 
00:19:52.422 [2024-11-26 01:05:15.145359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.153512] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:52.422 [2024-11-26 01:05:15.171677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.171731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:52.422 [2024-11-26 01:05:15.171744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.213 ms 00:19:52.422 [2024-11-26 01:05:15.171754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.171865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.171880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:52.422 [2024-11-26 01:05:15.171895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:52.422 [2024-11-26 01:05:15.171906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.171968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.171980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:52.422 [2024-11-26 01:05:15.171989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:52.422 [2024-11-26 01:05:15.172000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.172030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.172048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:52.422 [2024-11-26 01:05:15.172057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.422 [2024-11-26 01:05:15.172067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.172100] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:52.422 [2024-11-26 01:05:15.172113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.172121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:52.422 [2024-11-26 01:05:15.172130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:52.422 [2024-11-26 01:05:15.172138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.178128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.178175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:52.422 [2024-11-26 01:05:15.178191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.961 ms 00:19:52.422 [2024-11-26 01:05:15.178200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 01:05:15.178297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.422 [2024-11-26 01:05:15.178307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:52.422 [2024-11-26 01:05:15.178319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:52.422 [2024-11-26 01:05:15.178327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.422 [2024-11-26 
01:05:15.179374] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:52.422 [2024-11-26 01:05:15.180674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.838 ms, result 0 00:19:52.422 [2024-11-26 01:05:15.182669] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.422 Some configs were skipped because the RPC state that can call them passed over. 00:19:52.422 01:05:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:52.684 [2024-11-26 01:05:15.414265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.684 [2024-11-26 01:05:15.414328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:52.684 [2024-11-26 01:05:15.414342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.974 ms 00:19:52.684 [2024-11-26 01:05:15.414353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.684 [2024-11-26 01:05:15.414390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 5.103 ms, result 0 00:19:52.684 true 00:19:52.684 01:05:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:52.947 [2024-11-26 01:05:15.628402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.947 [2024-11-26 01:05:15.628452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:52.947 [2024-11-26 01:05:15.628466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:19:52.947 [2024-11-26 01:05:15.628474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.947 [2024-11-26 01:05:15.628516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.933 ms, result 0 00:19:52.947 true 00:19:52.948 01:05:15 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 89756 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89756 ']' 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89756 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89756 00:19:52.948 killing process with pid 89756 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89756' 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 89756 00:19:52.948 01:05:15 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 89756 00:19:52.948 [2024-11-26 01:05:15.816299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.816373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:52.948 [2024-11-26 01:05:15.816388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:52.948 [2024-11-26 
01:05:15.816402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.816426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:52.948 [2024-11-26 01:05:15.817190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.817243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:52.948 [2024-11-26 01:05:15.817258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:19:52.948 [2024-11-26 01:05:15.817267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.817572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.817584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:52.948 [2024-11-26 01:05:15.817595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:52.948 [2024-11-26 01:05:15.817603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.822252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.822294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:52.948 [2024-11-26 01:05:15.822307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.624 ms 00:19:52.948 [2024-11-26 01:05:15.822321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.829362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.829404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:52.948 [2024-11-26 01:05:15.829420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.993 ms 00:19:52.948 [2024-11-26 01:05:15.829428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.832091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.832138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:52.948 [2024-11-26 01:05:15.832149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.577 ms 00:19:52.948 [2024-11-26 01:05:15.832157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.837399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.837446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:52.948 [2024-11-26 01:05:15.837462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.188 ms 00:19:52.948 [2024-11-26 01:05:15.837470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.837622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.837634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:52.948 [2024-11-26 01:05:15.837645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:52.948 [2024-11-26 01:05:15.837653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.840799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.840855] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:52.948 [2024-11-26 01:05:15.840871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.117 ms 00:19:52.948 [2024-11-26 01:05:15.840878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.843613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.843657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:52.948 [2024-11-26 01:05:15.843670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.681 ms 00:19:52.948 [2024-11-26 01:05:15.843677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.845717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.845761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:52.948 [2024-11-26 01:05:15.845773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:19:52.948 [2024-11-26 01:05:15.845780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.848189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.948 [2024-11-26 01:05:15.848233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:52.948 [2024-11-26 01:05:15.848244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.303 ms 00:19:52.948 [2024-11-26 01:05:15.848251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.948 [2024-11-26 01:05:15.848293] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:52.948 [2024-11-26 01:05:15.848307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848421] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:52.948 [2024-11-26 01:05:15.848593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848637] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 
01:05:15.848873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.848998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:19:52.949 [2024-11-26 01:05:15.849099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:52.949 [2024-11-26 01:05:15.849228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:52.949 [2024-11-26 01:05:15.849238] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d18485b-f600-4eee-9447-45835e109f8e 00:19:52.949 [2024-11-26 01:05:15.849250] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:52.949 [2024-11-26 01:05:15.849260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:52.949 [2024-11-26 01:05:15.849267] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:52.949 [2024-11-26 01:05:15.849281] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:52.949 [2024-11-26 01:05:15.849292] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:52.949 [2024-11-26 01:05:15.849301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:52.949 [2024-11-26 01:05:15.849309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:52.949 [2024-11-26 01:05:15.849319] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:52.949 [2024-11-26 01:05:15.849325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:52.949 [2024-11-26 01:05:15.849335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.949 [2024-11-26 01:05:15.849344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:52.949 [2024-11-26 01:05:15.849359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:19:52.949 [2024-11-26 01:05:15.849367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:52.949 [2024-11-26 01:05:15.851596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.949 [2024-11-26 01:05:15.851630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:52.949 [2024-11-26 01:05:15.851646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:19:52.949 [2024-11-26 01:05:15.851655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.949 [2024-11-26 01:05:15.851800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.949 [2024-11-26 01:05:15.851810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:52.949 [2024-11-26 01:05:15.851823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:52.949 [2024-11-26 01:05:15.851834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.949 [2024-11-26 01:05:15.859820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.949 [2024-11-26 01:05:15.859877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.950 [2024-11-26 01:05:15.859891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.950 [2024-11-26 01:05:15.859900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.950 [2024-11-26 01:05:15.859977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.950 [2024-11-26 01:05:15.859986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.950 [2024-11-26 01:05:15.860000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.950 [2024-11-26 01:05:15.860013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.950 [2024-11-26 01:05:15.860066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.950 [2024-11-26 01:05:15.860077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.950 [2024-11-26 01:05:15.860087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.950 [2024-11-26 01:05:15.860095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.950 [2024-11-26 01:05:15.860117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.950 [2024-11-26 01:05:15.860126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.950 [2024-11-26 01:05:15.860136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.950 [2024-11-26 01:05:15.860143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.874526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.874583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.212 [2024-11-26 01:05:15.874596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 [2024-11-26 01:05:15.874605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.885429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.885482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.212 [2024-11-26 01:05:15.885500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 
[2024-11-26 01:05:15.885509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.885582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.885593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.212 [2024-11-26 01:05:15.885603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 [2024-11-26 01:05:15.885612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.885649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.885658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.212 [2024-11-26 01:05:15.885669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 [2024-11-26 01:05:15.885676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.885754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.885768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.212 [2024-11-26 01:05:15.885779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 [2024-11-26 01:05:15.885787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.885823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.885833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.212 [2024-11-26 01:05:15.885882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 [2024-11-26 01:05:15.885891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.885944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.885959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.212 [2024-11-26 01:05:15.885970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 [2024-11-26 01:05:15.885983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.886036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.212 [2024-11-26 01:05:15.886047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.212 [2024-11-26 01:05:15.886088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.212 [2024-11-26 01:05:15.886097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.212 [2024-11-26 01:05:15.886256] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.921 ms, result 0 00:19:53.474 01:05:16 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:53.474 [2024-11-26 01:05:16.199874] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:19:53.474 [2024-11-26 01:05:16.200022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89800 ] 00:19:53.474 [2024-11-26 01:05:16.341646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:53.474 [2024-11-26 01:05:16.367971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.736 [2024-11-26 01:05:16.396240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.736 [2024-11-26 01:05:16.509901] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:53.736 [2024-11-26 01:05:16.509983] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:54.000 [2024-11-26 01:05:16.671531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.671597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:54.000 [2024-11-26 01:05:16.671612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:54.000 [2024-11-26 01:05:16.671621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.674286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.674336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.000 [2024-11-26 01:05:16.674349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:19:54.000 [2024-11-26 01:05:16.674358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.674463] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:54.000 [2024-11-26 01:05:16.674832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:54.000 [2024-11-26 01:05:16.674895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.674903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.000 [2024-11-26 01:05:16.674913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:19:54.000 [2024-11-26 01:05:16.674924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.676630] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:54.000 [2024-11-26 01:05:16.680415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.680467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:54.000 [2024-11-26 01:05:16.680478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.787 ms 00:19:54.000 [2024-11-26 01:05:16.680486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.680577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.680588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:54.000 [2024-11-26 01:05:16.680598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:54.000 [2024-11-26 
01:05:16.680606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.688854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.688893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.000 [2024-11-26 01:05:16.688908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.186 ms 00:19:54.000 [2024-11-26 01:05:16.688918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.689049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.689061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.000 [2024-11-26 01:05:16.689071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:54.000 [2024-11-26 01:05:16.689081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.689108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.689121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:54.000 [2024-11-26 01:05:16.689129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:54.000 [2024-11-26 01:05:16.689141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.689163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:54.000 [2024-11-26 01:05:16.691208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.691250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.000 [2024-11-26 01:05:16.691262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:19:54.000 [2024-11-26 01:05:16.691272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.691314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.691323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:54.000 [2024-11-26 01:05:16.691331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:54.000 [2024-11-26 01:05:16.691339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.691357] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:54.000 [2024-11-26 01:05:16.691377] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:54.000 [2024-11-26 01:05:16.691414] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:54.000 [2024-11-26 01:05:16.691434] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:54.000 [2024-11-26 01:05:16.691540] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:54.000 [2024-11-26 01:05:16.691550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:54.000 [2024-11-26 01:05:16.691561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:19:54.000 [2024-11-26 01:05:16.691577] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:54.000 [2024-11-26 01:05:16.691586] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:54.000 [2024-11-26 01:05:16.691598] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:54.000 [2024-11-26 01:05:16.691606] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:54.000 [2024-11-26 01:05:16.691616] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:54.000 [2024-11-26 01:05:16.691629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:54.000 [2024-11-26 01:05:16.691637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.691645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:54.000 [2024-11-26 01:05:16.691653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:54.000 [2024-11-26 01:05:16.691660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.691748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.000 [2024-11-26 01:05:16.691761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:54.000 [2024-11-26 01:05:16.691769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:54.000 [2024-11-26 01:05:16.691776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.000 [2024-11-26 01:05:16.691907] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:54.000 [2024-11-26 01:05:16.691925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:54.000 [2024-11-26 01:05:16.691935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.000 [2024-11-26 01:05:16.691944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.000 [2024-11-26 01:05:16.691962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:54.000 [2024-11-26 01:05:16.691970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:54.000 [2024-11-26 01:05:16.691981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:54.000 [2024-11-26 01:05:16.691989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:54.000 [2024-11-26 01:05:16.691997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:54.000 [2024-11-26 01:05:16.692005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.000 [2024-11-26 01:05:16.692013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:54.000 [2024-11-26 01:05:16.692020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:54.000 [2024-11-26 01:05:16.692030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:54.000 [2024-11-26 01:05:16.692038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:54.000 [2024-11-26 01:05:16.692047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:54.000 [2024-11-26 01:05:16.692055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.000 [2024-11-26 01:05:16.692064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:19:54.000 [2024-11-26 01:05:16.692072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:54.000 [2024-11-26 01:05:16.692080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.000 [2024-11-26 01:05:16.692088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:54.000 [2024-11-26 01:05:16.692096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.001 [2024-11-26 01:05:16.692115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:54.001 [2024-11-26 01:05:16.692124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.001 [2024-11-26 01:05:16.692140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:54.001 [2024-11-26 01:05:16.692148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.001 [2024-11-26 01:05:16.692165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:54.001 [2024-11-26 01:05:16.692172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:54.001 [2024-11-26 01:05:16.692188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:54.001 [2024-11-26 01:05:16.692196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.001 [2024-11-26 01:05:16.692212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:54.001 [2024-11-26 01:05:16.692219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:54.001 [2024-11-26 01:05:16.692226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:54.001 [2024-11-26 01:05:16.692234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:54.001 [2024-11-26 01:05:16.692245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:54.001 [2024-11-26 01:05:16.692252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:54.001 [2024-11-26 01:05:16.692267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:54.001 [2024-11-26 01:05:16.692275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692282] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:54.001 [2024-11-26 01:05:16.692292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:54.001 [2024-11-26 01:05:16.692301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:54.001 [2024-11-26 01:05:16.692316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:54.001 [2024-11-26 01:05:16.692325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:54.001 [2024-11-26 01:05:16.692333] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:54.001 [2024-11-26 01:05:16.692341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:54.001 [2024-11-26 01:05:16.692349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:54.001 [2024-11-26 01:05:16.692357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:54.001 [2024-11-26 01:05:16.692364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:54.001 [2024-11-26 01:05:16.692372] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:54.001 [2024-11-26 01:05:16.692384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.001 [2024-11-26 01:05:16.692395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:54.001 [2024-11-26 01:05:16.692403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:54.001 [2024-11-26 01:05:16.692410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:54.001 [2024-11-26 01:05:16.692417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:54.001 [2024-11-26 01:05:16.692424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:54.001 [2024-11-26 01:05:16.692431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:54.001 [2024-11-26 01:05:16.692438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:54.001 [2024-11-26 01:05:16.692446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:54.001 [2024-11-26 01:05:16.692455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:54.001 [2024-11-26 01:05:16.692462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:54.001 [2024-11-26 01:05:16.692468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:54.001 [2024-11-26 01:05:16.692475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:54.001 [2024-11-26 01:05:16.692481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:54.001 [2024-11-26 01:05:16.692488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:54.001 [2024-11-26 01:05:16.692495] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:54.001 [2024-11-26 01:05:16.692508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:54.001 [2024-11-26 01:05:16.692516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:54.001 [2024-11-26 01:05:16.692523] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:54.001 [2024-11-26 01:05:16.692529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:54.001 [2024-11-26 01:05:16.692536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:54.001 [2024-11-26 01:05:16.692544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.692564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:54.001 [2024-11-26 01:05:16.692572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:19:54.001 [2024-11-26 01:05:16.692580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.707202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.707249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.001 [2024-11-26 01:05:16.707262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.569 ms 00:19:54.001 [2024-11-26 01:05:16.707272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.707411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.707430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:54.001 [2024-11-26 01:05:16.707441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:54.001 [2024-11-26 01:05:16.707454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.727897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.727950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.001 [2024-11-26 01:05:16.727963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.419 ms 00:19:54.001 [2024-11-26 01:05:16.727977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.728069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.728082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.001 [2024-11-26 01:05:16.728093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:54.001 [2024-11-26 01:05:16.728102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.728596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.728635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.001 [2024-11-26 01:05:16.728655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:19:54.001 [2024-11-26 01:05:16.728664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.728823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.728833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.001 [2024-11-26 01:05:16.728861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:54.001 [2024-11-26 01:05:16.728869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.737221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.737272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.001 [2024-11-26 01:05:16.737284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.327 ms 00:19:54.001 [2024-11-26 01:05:16.737293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.741174] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:54.001 [2024-11-26 01:05:16.741225] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:54.001 [2024-11-26 01:05:16.741240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.741250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:54.001 [2024-11-26 01:05:16.741259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.837 ms 00:19:54.001 [2024-11-26 01:05:16.741267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.756943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.756987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:54.001 [2024-11-26 01:05:16.757000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.610 ms 00:19:54.001 [2024-11-26 01:05:16.757009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.001 [2024-11-26 01:05:16.759484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.001 [2024-11-26 01:05:16.759532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:54.001 [2024-11-26 01:05:16.759542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.389 ms 00:19:54.002 [2024-11-26 01:05:16.759551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.761492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.761538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:54.002 [2024-11-26 01:05:16.761548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.888 ms 00:19:54.002 [2024-11-26 01:05:16.761556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.761936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.761967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:54.002 [2024-11-26 01:05:16.761977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:19:54.002 [2024-11-26 01:05:16.761986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.785924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.785985] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:54.002 [2024-11-26 01:05:16.785999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.906 ms 00:19:54.002 [2024-11-26 01:05:16.786008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.794304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:54.002 [2024-11-26 01:05:16.813458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.813518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:54.002 [2024-11-26 01:05:16.813532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.336 ms 00:19:54.002 [2024-11-26 01:05:16.813541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.813643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.813658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:54.002 [2024-11-26 01:05:16.813668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:54.002 [2024-11-26 01:05:16.813678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.813733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.813743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:54.002 [2024-11-26 01:05:16.813756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:54.002 [2024-11-26 01:05:16.813764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.813789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.813798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:54.002 [2024-11-26 01:05:16.813809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:54.002 [2024-11-26 01:05:16.813817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.813874] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:54.002 [2024-11-26 01:05:16.813886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.813893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:54.002 [2024-11-26 01:05:16.813902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:54.002 [2024-11-26 01:05:16.813914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.819904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.819952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:54.002 [2024-11-26 01:05:16.819972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.965 ms 00:19:54.002 [2024-11-26 01:05:16.819984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.820077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.002 [2024-11-26 01:05:16.820089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:54.002 [2024-11-26 01:05:16.820103] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:54.002 [2024-11-26 01:05:16.820114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.002 [2024-11-26 01:05:16.821285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:54.002 [2024-11-26 01:05:16.822653] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 149.436 ms, result 0 00:19:54.002 [2024-11-26 01:05:16.824033] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:54.002 [2024-11-26 01:05:16.831275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.391  [2024-11-26T01:05:18.881Z] Copying: 14/256 [MB] (14 MBps) [2024-11-26T01:05:20.268Z] Copying: 29/256 [MB] (15 MBps) [2024-11-26T01:05:21.271Z] Copying: 49/256 [MB] (19 MBps) [2024-11-26T01:05:22.217Z] Copying: 66/256 [MB] (17 MBps) [2024-11-26T01:05:23.163Z] Copying: 80/256 [MB] (13 MBps) [2024-11-26T01:05:24.109Z] Copying: 97/256 [MB] (16 MBps) [2024-11-26T01:05:25.052Z] Copying: 114/256 [MB] (17 MBps) [2024-11-26T01:05:25.992Z] Copying: 124/256 [MB] (10 MBps) [2024-11-26T01:05:26.937Z] Copying: 161/256 [MB] (37 MBps) [2024-11-26T01:05:27.881Z] Copying: 171/256 [MB] (10 MBps) [2024-11-26T01:05:29.266Z] Copying: 182/256 [MB] (10 MBps) [2024-11-26T01:05:30.208Z] Copying: 198/256 [MB] (15 MBps) [2024-11-26T01:05:31.182Z] Copying: 214/256 [MB] (16 MBps) [2024-11-26T01:05:32.128Z] Copying: 228/256 [MB] (13 MBps) [2024-11-26T01:05:32.390Z] Copying: 247/256 [MB] (19 MBps) [2024-11-26T01:05:32.962Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-26 01:05:32.694772] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.045 [2024-11-26 01:05:32.697391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.045 [2024-11-26 01:05:32.697449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:10.045 [2024-11-26 01:05:32.697466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:10.045 [2024-11-26 01:05:32.697479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.045 [2024-11-26 01:05:32.697505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:10.045 [2024-11-26 01:05:32.698483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.045 [2024-11-26 01:05:32.698543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:10.045 [2024-11-26 01:05:32.698557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.959 ms 00:20:10.045 [2024-11-26 01:05:32.698570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.045 [2024-11-26 01:05:32.698970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.699001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:10.046 [2024-11-26 01:05:32.699016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:20:10.046 [2024-11-26 01:05:32.699026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.702926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.702957] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:10.046 [2024-11-26 01:05:32.702968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.877 ms 00:20:10.046 [2024-11-26 01:05:32.702976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.709887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.709930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:10.046 [2024-11-26 01:05:32.709950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:20:10.046 [2024-11-26 01:05:32.709958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.713420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.713475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:10.046 [2024-11-26 01:05:32.713486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.383 ms 00:20:10.046 [2024-11-26 01:05:32.713495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.719109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.719179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:10.046 [2024-11-26 01:05:32.719196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.561 ms 00:20:10.046 [2024-11-26 01:05:32.719207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.719358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.719374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:10.046 [2024-11-26 01:05:32.719394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:10.046 [2024-11-26 01:05:32.719404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.722252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.722302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:10.046 [2024-11-26 01:05:32.722312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:20:10.046 [2024-11-26 01:05:32.722321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.724733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.724778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:10.046 [2024-11-26 01:05:32.724789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.363 ms 00:20:10.046 [2024-11-26 01:05:32.724797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.727308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.727355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:10.046 [2024-11-26 01:05:32.727366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.469 ms 00:20:10.046 [2024-11-26 01:05:32.727373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.729751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:10.046 [2024-11-26 01:05:32.729797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:10.046 [2024-11-26 01:05:32.729807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.297 ms 00:20:10.046 [2024-11-26 01:05:32.729815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.046 [2024-11-26 01:05:32.729870] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:10.046 [2024-11-26 01:05:32.729888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.729899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.729932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730833] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.730992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731096] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:10.046 [2024-11-26 01:05:32.731169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 
01:05:32.731296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:20:10.047 [2024-11-26 01:05:32.731500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:10.047 [2024-11-26 01:05:32.731556] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:10.047 [2024-11-26 01:05:32.731568] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0d18485b-f600-4eee-9447-45835e109f8e 00:20:10.047 [2024-11-26 01:05:32.731576] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:10.047 [2024-11-26 01:05:32.731587] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:10.047 [2024-11-26 01:05:32.731594] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:10.047 [2024-11-26 01:05:32.731604] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:10.047 [2024-11-26 01:05:32.731613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:10.047 [2024-11-26 01:05:32.731628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:10.047 [2024-11-26 01:05:32.731638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:10.047 [2024-11-26 01:05:32.731644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:10.047 [2024-11-26 01:05:32.731651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:10.047 [2024-11-26 01:05:32.731658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.047 [2024-11-26 01:05:32.731668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:10.047 [2024-11-26 01:05:32.731678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:20:10.047 [2024-11-26 01:05:32.731686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.735002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.047 [2024-11-26 01:05:32.735047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:10.047 [2024-11-26 01:05:32.735058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.190 ms 00:20:10.047 [2024-11-26 01:05:32.735070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.735220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.047 [2024-11-26 01:05:32.735232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:10.047 [2024-11-26 01:05:32.735240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:20:10.047 [2024-11-26 01:05:32.735248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.746567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.746616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.047 [2024-11-26 01:05:32.746639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 
[2024-11-26 01:05:32.746647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.746735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.746746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.047 [2024-11-26 01:05:32.746756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 [2024-11-26 01:05:32.746764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.746824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.746837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.047 [2024-11-26 01:05:32.746866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 [2024-11-26 01:05:32.746879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.746899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.746908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.047 [2024-11-26 01:05:32.746917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 [2024-11-26 01:05:32.746927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.767399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.767459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.047 [2024-11-26 01:05:32.767472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 [2024-11-26 01:05:32.767485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.783454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.783510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.047 [2024-11-26 01:05:32.783523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 [2024-11-26 01:05:32.783533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.783589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.783600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.047 [2024-11-26 01:05:32.783610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 [2024-11-26 01:05:32.783619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.783664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.783675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.047 [2024-11-26 01:05:32.783686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.047 [2024-11-26 01:05:32.783695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.047 [2024-11-26 01:05:32.783782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.047 [2024-11-26 01:05:32.783796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.048 [2024-11-26 01:05:32.783806] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.048 [2024-11-26 01:05:32.783816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.048 [2024-11-26 01:05:32.783898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.048 [2024-11-26 01:05:32.783913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:10.048 [2024-11-26 01:05:32.783922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.048 [2024-11-26 01:05:32.783931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.048 [2024-11-26 01:05:32.783995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.048 [2024-11-26 01:05:32.784007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.048 [2024-11-26 01:05:32.784019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.048 [2024-11-26 01:05:32.784030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.048 [2024-11-26 01:05:32.784098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.048 [2024-11-26 01:05:32.784111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.048 [2024-11-26 01:05:32.784123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.048 [2024-11-26 01:05:32.784133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.048 [2024-11-26 01:05:32.784330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.901 ms, result 0 00:20:10.307 00:20:10.307 00:20:10.307 01:05:33 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:10.875 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:10.875 01:05:33 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:10.875 01:05:33 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:10.875 01:05:33 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:10.875 01:05:33 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:10.875 01:05:33 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:10.875 01:05:33 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:10.875 01:05:33 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 89756 00:20:10.875 01:05:33 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 89756 ']' 00:20:10.875 Process with pid 89756 is not found 00:20:10.875 01:05:33 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 89756 00:20:10.875 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89756) - No such process 00:20:10.875 01:05:33 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 89756 is not found' 00:20:10.875 00:20:10.875 real 1m11.003s 00:20:10.875 user 1m33.161s 00:20:10.875 sys 0m5.772s 00:20:10.875 01:05:33 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:10.875 01:05:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:10.875 ************************************ 00:20:10.875 END TEST ftl_trim 00:20:10.875 ************************************ 00:20:10.875 01:05:33 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:10.875 01:05:33 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:10.875 01:05:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:10.875 01:05:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:10.875 ************************************ 00:20:10.875 START TEST ftl_restore 00:20:10.875 ************************************ 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:10.875 * Looking for test storage... 00:20:10.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:10.875 01:05:33 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:10.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.875 --rc genhtml_branch_coverage=1 00:20:10.875 --rc genhtml_function_coverage=1 00:20:10.875 --rc genhtml_legend=1 00:20:10.875 --rc geninfo_all_blocks=1 00:20:10.875 --rc geninfo_unexecuted_blocks=1 00:20:10.875 00:20:10.875 ' 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:10.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.875 --rc genhtml_branch_coverage=1 00:20:10.875 --rc genhtml_function_coverage=1 00:20:10.875 --rc genhtml_legend=1 00:20:10.875 --rc geninfo_all_blocks=1 00:20:10.875 --rc geninfo_unexecuted_blocks=1 00:20:10.875 00:20:10.875 ' 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:10.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.875 --rc genhtml_branch_coverage=1 00:20:10.875 --rc genhtml_function_coverage=1 00:20:10.875 --rc genhtml_legend=1 00:20:10.875 --rc geninfo_all_blocks=1 00:20:10.875 --rc geninfo_unexecuted_blocks=1 00:20:10.875 00:20:10.875 ' 00:20:10.875 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:10.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:10.875 --rc genhtml_branch_coverage=1 00:20:10.875 --rc genhtml_function_coverage=1 00:20:10.875 --rc genhtml_legend=1 00:20:10.875 --rc geninfo_all_blocks=1 00:20:10.875 --rc geninfo_unexecuted_blocks=1 00:20:10.875 00:20:10.875 ' 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
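The lt/cmp_versions xtrace above implements a component-wise version comparison: both version strings are split on '.', '-' and ':' into arrays, and the fields are compared numerically from left to right; the result decides whether the pre-2.x LCOV_OPTS exported just above are used. A minimal standalone sketch of that logic, assuming missing fields compare as 0 (the function name cmp_lt is illustrative; this is a reconstruction for readers, not the verbatim scripts/common.sh source):

    cmp_lt() {   # returns 0 (true) when version $1 sorts below version $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':' as traced above
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first lower field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not 'less than'
    }
    cmp_lt 1.15 2 && echo 'lcov older than 2.x'   # true in this run, matching the trace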
00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:10.875 01:05:33 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.JcoCTof3xp 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:10.876 
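The entries above trace restore.sh consuming its arguments: mktemp -d provides the scratch mount point, -c 0000:00:10.0 selects the NV-cache PCIe device, and the remaining positional argument 0000:00:11.0 becomes the base device, with a 240 s timeout and a restore_kill trap armed before the SPDK target starts. A condensed sketch of that option handling (a reconstruction for illustration, not the verbatim script; the generic shift of parsed options stands in for the literal 'shift 2' seen in the trace):

    mount_dir=$(mktemp -d)            # e.g. /tmp/tmp.JcoCTof3xp in the trace above
    while getopts ':u:c:f' opt; do    # same optstring as the traced getopts
        case $opt in
            c) nv_cache=$OPTARG ;;    # -c 0000:00:10.0 -> NV-cache device
            # -u (UUID) and -f are accepted by the optstring but unused in this run
        esac
    done
    shift $((OPTIND - 1))             # the trace shows the equivalent 'shift 2'
    device=$1                         # 0000:00:11.0 -> base bdev device
    timeout=240
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT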
01:05:33 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90047 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90047 00:20:10.876 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 90047 ']' 00:20:10.876 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.876 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:10.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.876 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.876 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:10.876 01:05:33 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:10.876 01:05:33 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:11.136 [2024-11-26 01:05:33.867033] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:20:11.136 [2024-11-26 01:05:33.867157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90047 ] 00:20:11.136 [2024-11-26 01:05:34.001202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:11.136 [2024-11-26 01:05:34.029322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.396 [2024-11-26 01:05:34.067366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.968 01:05:34 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:11.969 01:05:34 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:11.969 01:05:34 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:11.969 01:05:34 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:11.969 01:05:34 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:11.969 01:05:34 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:11.969 01:05:34 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:11.969 01:05:34 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:12.230 01:05:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:12.230 01:05:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:12.230 01:05:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:12.230 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:12.230 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:12.230 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:12.230 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:12.230 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:12.491 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:12.491 { 00:20:12.491 "name": "nvme0n1", 00:20:12.491 "aliases": [ 00:20:12.491 
"f422d701-dbe6-4c6b-97e7-4b1a03944c6e" 00:20:12.491 ], 00:20:12.491 "product_name": "NVMe disk", 00:20:12.491 "block_size": 4096, 00:20:12.491 "num_blocks": 1310720, 00:20:12.491 "uuid": "f422d701-dbe6-4c6b-97e7-4b1a03944c6e", 00:20:12.491 "numa_id": -1, 00:20:12.491 "assigned_rate_limits": { 00:20:12.491 "rw_ios_per_sec": 0, 00:20:12.491 "rw_mbytes_per_sec": 0, 00:20:12.491 "r_mbytes_per_sec": 0, 00:20:12.491 "w_mbytes_per_sec": 0 00:20:12.491 }, 00:20:12.491 "claimed": true, 00:20:12.491 "claim_type": "read_many_write_one", 00:20:12.491 "zoned": false, 00:20:12.491 "supported_io_types": { 00:20:12.491 "read": true, 00:20:12.491 "write": true, 00:20:12.491 "unmap": true, 00:20:12.491 "flush": true, 00:20:12.491 "reset": true, 00:20:12.491 "nvme_admin": true, 00:20:12.491 "nvme_io": true, 00:20:12.491 "nvme_io_md": false, 00:20:12.491 "write_zeroes": true, 00:20:12.491 "zcopy": false, 00:20:12.491 "get_zone_info": false, 00:20:12.491 "zone_management": false, 00:20:12.491 "zone_append": false, 00:20:12.491 "compare": true, 00:20:12.491 "compare_and_write": false, 00:20:12.491 "abort": true, 00:20:12.491 "seek_hole": false, 00:20:12.491 "seek_data": false, 00:20:12.491 "copy": true, 00:20:12.491 "nvme_iov_md": false 00:20:12.491 }, 00:20:12.491 "driver_specific": { 00:20:12.491 "nvme": [ 00:20:12.491 { 00:20:12.491 "pci_address": "0000:00:11.0", 00:20:12.491 "trid": { 00:20:12.491 "trtype": "PCIe", 00:20:12.491 "traddr": "0000:00:11.0" 00:20:12.491 }, 00:20:12.491 "ctrlr_data": { 00:20:12.491 "cntlid": 0, 00:20:12.491 "vendor_id": "0x1b36", 00:20:12.491 "model_number": "QEMU NVMe Ctrl", 00:20:12.491 "serial_number": "12341", 00:20:12.491 "firmware_revision": "8.0.0", 00:20:12.491 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:12.491 "oacs": { 00:20:12.491 "security": 0, 00:20:12.491 "format": 1, 00:20:12.491 "firmware": 0, 00:20:12.491 "ns_manage": 1 00:20:12.491 }, 00:20:12.491 "multi_ctrlr": false, 00:20:12.491 "ana_reporting": false 00:20:12.491 }, 00:20:12.491 "vs": { 00:20:12.491 "nvme_version": "1.4" 00:20:12.491 }, 00:20:12.491 "ns_data": { 00:20:12.491 "id": 1, 00:20:12.491 "can_share": false 00:20:12.491 } 00:20:12.491 } 00:20:12.491 ], 00:20:12.491 "mp_policy": "active_passive" 00:20:12.491 } 00:20:12.491 } 00:20:12.491 ]' 00:20:12.491 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:12.491 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:12.491 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:12.491 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:12.491 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:12.491 01:05:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:12.491 01:05:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:12.491 01:05:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:12.491 01:05:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:12.491 01:05:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:12.491 01:05:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:12.752 01:05:35 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5de077d4-ec85-4e78-b34c-9bf87bfe4c7f 00:20:12.753 01:05:35 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:12.753 01:05:35 ftl.ftl_restore -- ftl/common.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5de077d4-ec85-4e78-b34c-9bf87bfe4c7f 00:20:13.012 01:05:35 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:13.272 01:05:35 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c28bb4a5-06d2-4976-b681-468bf012e66a 00:20:13.272 01:05:35 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c28bb4a5-06d2-4976-b681-468bf012e66a 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a1301968-31c8-4df1-bfad-46043de72262 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a1301968-31c8-4df1-bfad-46043de72262 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a1301968-31c8-4df1-bfad-46043de72262 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:13.272 01:05:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size a1301968-31c8-4df1-bfad-46043de72262 00:20:13.272 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a1301968-31c8-4df1-bfad-46043de72262 00:20:13.272 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:13.272 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:13.272 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:13.533 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1301968-31c8-4df1-bfad-46043de72262 00:20:13.533 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:13.533 { 00:20:13.533 "name": "a1301968-31c8-4df1-bfad-46043de72262", 00:20:13.533 "aliases": [ 00:20:13.533 "lvs/nvme0n1p0" 00:20:13.533 ], 00:20:13.533 "product_name": "Logical Volume", 00:20:13.533 "block_size": 4096, 00:20:13.533 "num_blocks": 26476544, 00:20:13.533 "uuid": "a1301968-31c8-4df1-bfad-46043de72262", 00:20:13.533 "assigned_rate_limits": { 00:20:13.533 "rw_ios_per_sec": 0, 00:20:13.533 "rw_mbytes_per_sec": 0, 00:20:13.533 "r_mbytes_per_sec": 0, 00:20:13.533 "w_mbytes_per_sec": 0 00:20:13.533 }, 00:20:13.533 "claimed": false, 00:20:13.533 "zoned": false, 00:20:13.533 "supported_io_types": { 00:20:13.533 "read": true, 00:20:13.533 "write": true, 00:20:13.533 "unmap": true, 00:20:13.533 "flush": false, 00:20:13.533 "reset": true, 00:20:13.533 "nvme_admin": false, 00:20:13.533 "nvme_io": false, 00:20:13.533 "nvme_io_md": false, 00:20:13.533 "write_zeroes": true, 00:20:13.533 "zcopy": false, 00:20:13.533 "get_zone_info": false, 00:20:13.533 "zone_management": false, 00:20:13.533 "zone_append": false, 00:20:13.533 "compare": false, 00:20:13.533 "compare_and_write": false, 00:20:13.533 "abort": false, 00:20:13.533 "seek_hole": true, 00:20:13.533 "seek_data": true, 00:20:13.533 "copy": false, 00:20:13.533 "nvme_iov_md": false 00:20:13.533 }, 00:20:13.533 "driver_specific": { 00:20:13.533 "lvol": { 00:20:13.533 "lvol_store_uuid": "c28bb4a5-06d2-4976-b681-468bf012e66a", 00:20:13.533 "base_bdev": "nvme0n1", 00:20:13.533 "thin_provision": true, 00:20:13.533 "num_allocated_clusters": 0, 
00:20:13.533 "snapshot": false, 00:20:13.533 "clone": false, 00:20:13.533 "esnap_clone": false 00:20:13.533 } 00:20:13.533 } 00:20:13.533 } 00:20:13.533 ]' 00:20:13.533 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:13.533 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:13.533 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:13.794 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:13.794 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:13.794 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:13.794 01:05:36 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:13.794 01:05:36 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:13.794 01:05:36 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:14.055 01:05:36 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:14.055 01:05:36 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:14.055 01:05:36 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a1301968-31c8-4df1-bfad-46043de72262 00:20:14.055 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a1301968-31c8-4df1-bfad-46043de72262 00:20:14.055 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:14.055 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:14.055 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:14.055 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1301968-31c8-4df1-bfad-46043de72262 00:20:14.055 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:14.055 { 00:20:14.056 "name": "a1301968-31c8-4df1-bfad-46043de72262", 00:20:14.056 "aliases": [ 00:20:14.056 "lvs/nvme0n1p0" 00:20:14.056 ], 00:20:14.056 "product_name": "Logical Volume", 00:20:14.056 "block_size": 4096, 00:20:14.056 "num_blocks": 26476544, 00:20:14.056 "uuid": "a1301968-31c8-4df1-bfad-46043de72262", 00:20:14.056 "assigned_rate_limits": { 00:20:14.056 "rw_ios_per_sec": 0, 00:20:14.056 "rw_mbytes_per_sec": 0, 00:20:14.056 "r_mbytes_per_sec": 0, 00:20:14.056 "w_mbytes_per_sec": 0 00:20:14.056 }, 00:20:14.056 "claimed": false, 00:20:14.056 "zoned": false, 00:20:14.056 "supported_io_types": { 00:20:14.056 "read": true, 00:20:14.056 "write": true, 00:20:14.056 "unmap": true, 00:20:14.056 "flush": false, 00:20:14.056 "reset": true, 00:20:14.056 "nvme_admin": false, 00:20:14.056 "nvme_io": false, 00:20:14.056 "nvme_io_md": false, 00:20:14.056 "write_zeroes": true, 00:20:14.056 "zcopy": false, 00:20:14.056 "get_zone_info": false, 00:20:14.056 "zone_management": false, 00:20:14.056 "zone_append": false, 00:20:14.056 "compare": false, 00:20:14.056 "compare_and_write": false, 00:20:14.056 "abort": false, 00:20:14.056 "seek_hole": true, 00:20:14.056 "seek_data": true, 00:20:14.056 "copy": false, 00:20:14.056 "nvme_iov_md": false 00:20:14.056 }, 00:20:14.056 "driver_specific": { 00:20:14.056 "lvol": { 00:20:14.056 "lvol_store_uuid": "c28bb4a5-06d2-4976-b681-468bf012e66a", 00:20:14.056 "base_bdev": "nvme0n1", 00:20:14.056 "thin_provision": true, 00:20:14.056 "num_allocated_clusters": 0, 00:20:14.056 "snapshot": false, 00:20:14.056 "clone": false, 
00:20:14.056 "esnap_clone": false 00:20:14.056 } 00:20:14.056 } 00:20:14.056 } 00:20:14.056 ]' 00:20:14.056 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:14.056 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:14.056 01:05:36 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:14.316 01:05:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:14.316 01:05:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:14.316 01:05:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:14.316 01:05:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a1301968-31c8-4df1-bfad-46043de72262 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a1301968-31c8-4df1-bfad-46043de72262 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:14.316 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1301968-31c8-4df1-bfad-46043de72262 00:20:14.577 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:14.577 { 00:20:14.577 "name": "a1301968-31c8-4df1-bfad-46043de72262", 00:20:14.577 "aliases": [ 00:20:14.577 "lvs/nvme0n1p0" 00:20:14.577 ], 00:20:14.577 "product_name": "Logical Volume", 00:20:14.577 "block_size": 4096, 00:20:14.577 "num_blocks": 26476544, 00:20:14.577 "uuid": "a1301968-31c8-4df1-bfad-46043de72262", 00:20:14.577 "assigned_rate_limits": { 00:20:14.577 "rw_ios_per_sec": 0, 00:20:14.577 "rw_mbytes_per_sec": 0, 00:20:14.577 "r_mbytes_per_sec": 0, 00:20:14.577 "w_mbytes_per_sec": 0 00:20:14.577 }, 00:20:14.577 "claimed": false, 00:20:14.577 "zoned": false, 00:20:14.577 "supported_io_types": { 00:20:14.577 "read": true, 00:20:14.577 "write": true, 00:20:14.577 "unmap": true, 00:20:14.577 "flush": false, 00:20:14.577 "reset": true, 00:20:14.577 "nvme_admin": false, 00:20:14.577 "nvme_io": false, 00:20:14.577 "nvme_io_md": false, 00:20:14.577 "write_zeroes": true, 00:20:14.577 "zcopy": false, 00:20:14.577 "get_zone_info": false, 00:20:14.577 "zone_management": false, 00:20:14.577 "zone_append": false, 00:20:14.577 "compare": false, 00:20:14.578 "compare_and_write": false, 00:20:14.578 "abort": false, 00:20:14.578 "seek_hole": true, 00:20:14.578 "seek_data": true, 00:20:14.578 "copy": false, 00:20:14.578 "nvme_iov_md": false 00:20:14.578 }, 00:20:14.578 "driver_specific": { 00:20:14.578 "lvol": { 00:20:14.578 "lvol_store_uuid": "c28bb4a5-06d2-4976-b681-468bf012e66a", 00:20:14.578 "base_bdev": "nvme0n1", 00:20:14.578 "thin_provision": true, 00:20:14.578 "num_allocated_clusters": 0, 00:20:14.578 "snapshot": false, 00:20:14.578 "clone": false, 00:20:14.578 "esnap_clone": false 00:20:14.578 } 00:20:14.578 } 00:20:14.578 } 00:20:14.578 ]' 00:20:14.578 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:14.578 01:05:37 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # bs=4096 00:20:14.578 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:14.578 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:14.578 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:14.578 01:05:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:14.578 01:05:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:14.578 01:05:37 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a1301968-31c8-4df1-bfad-46043de72262 --l2p_dram_limit 10' 00:20:14.578 01:05:37 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:14.578 01:05:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:14.578 01:05:37 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:14.578 01:05:37 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:14.578 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:14.578 01:05:37 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a1301968-31c8-4df1-bfad-46043de72262 --l2p_dram_limit 10 -c nvc0n1p0 00:20:14.840 [2024-11-26 01:05:37.654885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.654921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:14.840 [2024-11-26 01:05:37.654935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:14.840 [2024-11-26 01:05:37.654942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.654981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.654991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.840 [2024-11-26 01:05:37.655001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:14.840 [2024-11-26 01:05:37.655007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.655027] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:14.840 [2024-11-26 01:05:37.655218] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:14.840 [2024-11-26 01:05:37.655233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.655240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.840 [2024-11-26 01:05:37.655249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:20:14.840 [2024-11-26 01:05:37.655255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.655279] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 24fe6672-7f6f-41a2-a551-9dd1d146a529 00:20:14.840 [2024-11-26 01:05:37.656518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.656540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:14.840 [2024-11-26 01:05:37.656549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:14.840 [2024-11-26 01:05:37.656561] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.663379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.663403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.840 [2024-11-26 01:05:37.663411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.780 ms 00:20:14.840 [2024-11-26 01:05:37.663422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.663525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.663538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.840 [2024-11-26 01:05:37.663545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:14.840 [2024-11-26 01:05:37.663557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.663592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.663602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:14.840 [2024-11-26 01:05:37.663609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.840 [2024-11-26 01:05:37.663616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.663635] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.840 [2024-11-26 01:05:37.665247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.665270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.840 [2024-11-26 01:05:37.665280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:20:14.840 [2024-11-26 01:05:37.665288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.665317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.840 [2024-11-26 01:05:37.665324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:14.840 [2024-11-26 01:05:37.665335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:14.840 [2024-11-26 01:05:37.665342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.840 [2024-11-26 01:05:37.665358] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:14.840 [2024-11-26 01:05:37.665469] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:14.840 [2024-11-26 01:05:37.665479] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:14.841 [2024-11-26 01:05:37.665489] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:14.841 [2024-11-26 01:05:37.665512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665519] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665529] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:14.841 [2024-11-26 01:05:37.665535] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 
4 00:20:14.841 [2024-11-26 01:05:37.665542] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:14.841 [2024-11-26 01:05:37.665547] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:14.841 [2024-11-26 01:05:37.665555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.841 [2024-11-26 01:05:37.665560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:14.841 [2024-11-26 01:05:37.665568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:20:14.841 [2024-11-26 01:05:37.665573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.841 [2024-11-26 01:05:37.665639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.841 [2024-11-26 01:05:37.665645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:14.841 [2024-11-26 01:05:37.665652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:14.841 [2024-11-26 01:05:37.665662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.841 [2024-11-26 01:05:37.665733] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:14.841 [2024-11-26 01:05:37.665739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:14.841 [2024-11-26 01:05:37.665747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:14.841 [2024-11-26 01:05:37.665765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:14.841 [2024-11-26 01:05:37.665784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.841 [2024-11-26 01:05:37.665795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:14.841 [2024-11-26 01:05:37.665800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:14.841 [2024-11-26 01:05:37.665809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:14.841 [2024-11-26 01:05:37.665814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:14.841 [2024-11-26 01:05:37.665820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:14.841 [2024-11-26 01:05:37.665825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:14.841 [2024-11-26 01:05:37.665837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:14.841 [2024-11-26 01:05:37.665867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665872] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:14.841 [2024-11-26 01:05:37.665883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:14.841 [2024-11-26 01:05:37.665902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:14.841 [2024-11-26 01:05:37.665920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:14.841 [2024-11-26 01:05:37.665932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:14.841 [2024-11-26 01:05:37.665939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.841 [2024-11-26 01:05:37.665950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:14.841 [2024-11-26 01:05:37.665956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:14.841 [2024-11-26 01:05:37.665964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:14.841 [2024-11-26 01:05:37.665970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:14.841 [2024-11-26 01:05:37.665978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:14.841 [2024-11-26 01:05:37.665983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.841 [2024-11-26 01:05:37.665990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:14.841 [2024-11-26 01:05:37.665996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:14.841 [2024-11-26 01:05:37.666002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.841 [2024-11-26 01:05:37.666007] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:14.841 [2024-11-26 01:05:37.666015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:14.841 [2024-11-26 01:05:37.666021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:14.841 [2024-11-26 01:05:37.666028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:14.841 [2024-11-26 01:05:37.666035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:14.841 [2024-11-26 01:05:37.666041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:14.841 [2024-11-26 01:05:37.666054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:14.841 [2024-11-26 01:05:37.666061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:14.841 [2024-11-26 01:05:37.666065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:14.841 [2024-11-26 01:05:37.666072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 
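The sizing arithmetic traced above is internally consistent: the two jq probes return block_size 4096 and num_blocks 26476544, and 26476544 blocks * 4096 B = 103424 MiB, the value echoed as bdev_size; the 5171 MiB write-buffer cache split off nvc0n1 earlier was derived the same way. A minimal sketch of that size helper, assuming the rpc.py path used throughout this log; an illustration, not the verbatim autotest_common.sh code:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        # probe the bdev and pull the two fields the trace above extracts
        bdev_info=$("$rpc_py" bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<<"$bdev_info")    # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<<"$bdev_info")    # 26476544 in this run
        echo $(( bs * nb / 1024 / 1024 ))             # size in MiB: 103424 here
    }

The layout dump agrees with those numbers: 20971520 L2P entries at 4 B each are exactly the 80.00 MiB l2p region, and the base device reports the full 103424.00 MiB. The "integer expression expected" message from restore.sh line 54 is a test-script quirk rather than an FTL failure: '[' '' -eq 1 ']' hands an empty string to an arithmetic test, and guarding the expansion, for example [ "${flag:-0}" -eq 1 ] (variable name hypothetical), would avoid it.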
00:20:14.841 [2024-11-26 01:05:37.666080] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:14.841 [2024-11-26 01:05:37.666089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.841 [2024-11-26 01:05:37.666095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:14.841 [2024-11-26 01:05:37.666103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:14.841 [2024-11-26 01:05:37.666109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:14.841 [2024-11-26 01:05:37.666116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:14.841 [2024-11-26 01:05:37.666121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:14.841 [2024-11-26 01:05:37.666130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:14.841 [2024-11-26 01:05:37.666135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:14.841 [2024-11-26 01:05:37.666142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:14.841 [2024-11-26 01:05:37.666147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:14.841 [2024-11-26 01:05:37.666154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:14.841 [2024-11-26 01:05:37.666159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:14.841 [2024-11-26 01:05:37.666166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:14.841 [2024-11-26 01:05:37.666172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:14.841 [2024-11-26 01:05:37.666179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:14.841 [2024-11-26 01:05:37.666185] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:14.841 [2024-11-26 01:05:37.666192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:14.841 [2024-11-26 01:05:37.666198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:14.841 [2024-11-26 01:05:37.666205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:14.841 [2024-11-26 01:05:37.666211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:14.841 [2024-11-26 01:05:37.666218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:14.841 [2024-11-26 01:05:37.666224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.841 [2024-11-26 01:05:37.666232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:14.841 [2024-11-26 01:05:37.666238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:20:14.841 [2024-11-26 01:05:37.666245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.841 [2024-11-26 01:05:37.666275] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:14.841 [2024-11-26 01:05:37.666283] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:19.051 [2024-11-26 01:05:41.248424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.248481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:19.051 [2024-11-26 01:05:41.248495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3582.135 ms 00:20:19.051 [2024-11-26 01:05:41.248504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.259008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.259051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.051 [2024-11-26 01:05:41.259063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.427 ms 00:20:19.051 [2024-11-26 01:05:41.259078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.259147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.259157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.051 [2024-11-26 01:05:41.259164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:19.051 [2024-11-26 01:05:41.259173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.269141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.269176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.051 [2024-11-26 01:05:41.269184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.932 ms 00:20:19.051 [2024-11-26 01:05:41.269195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.269218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.269226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.051 [2024-11-26 01:05:41.269233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:19.051 [2024-11-26 01:05:41.269241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.269641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.269664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.051 [2024-11-26 01:05:41.269675] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:20:19.051 [2024-11-26 01:05:41.269685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.269775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.269784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.051 [2024-11-26 01:05:41.269791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:19.051 [2024-11-26 01:05:41.269799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.276593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.276620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.051 [2024-11-26 01:05:41.276628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.779 ms 00:20:19.051 [2024-11-26 01:05:41.276636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.284084] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:19.051 [2024-11-26 01:05:41.287057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.287078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.051 [2024-11-26 01:05:41.287089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.364 ms 00:20:19.051 [2024-11-26 01:05:41.287096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.366563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.051 [2024-11-26 01:05:41.366597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:19.051 [2024-11-26 01:05:41.366611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.442 ms 00:20:19.051 [2024-11-26 01:05:41.366618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.051 [2024-11-26 01:05:41.366771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.366780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.052 [2024-11-26 01:05:41.366789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:20:19.052 [2024-11-26 01:05:41.366796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.370555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.370580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:19.052 [2024-11-26 01:05:41.370593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:20:19.052 [2024-11-26 01:05:41.370599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.373856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.373879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:19.052 [2024-11-26 01:05:41.373888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:20:19.052 [2024-11-26 01:05:41.373894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.374137] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.374147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.052 [2024-11-26 01:05:41.374158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:20:19.052 [2024-11-26 01:05:41.374165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.405679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.405705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:19.052 [2024-11-26 01:05:41.405718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.495 ms 00:20:19.052 [2024-11-26 01:05:41.405724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.410383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.410406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:19.052 [2024-11-26 01:05:41.410416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.619 ms 00:20:19.052 [2024-11-26 01:05:41.410423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.413958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.413981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:19.052 [2024-11-26 01:05:41.413990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.504 ms 00:20:19.052 [2024-11-26 01:05:41.413995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.418259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.418283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.052 [2024-11-26 01:05:41.418294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.234 ms 00:20:19.052 [2024-11-26 01:05:41.418300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.418332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.418340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.052 [2024-11-26 01:05:41.418348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.052 [2024-11-26 01:05:41.418354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.418416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.418424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.052 [2024-11-26 01:05:41.418434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:19.052 [2024-11-26 01:05:41.418441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.419221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3763.979 ms, result 0 00:20:19.052 { 00:20:19.052 "name": "ftl0", 00:20:19.052 "uuid": "24fe6672-7f6f-41a2-a551-9dd1d146a529" 00:20:19.052 } 00:20:19.052 01:05:41 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:19.052 01:05:41 ftl.ftl_restore -- ftl/restore.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:19.052 01:05:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:19.052 01:05:41 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:19.052 [2024-11-26 01:05:41.824388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.824428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:19.052 [2024-11-26 01:05:41.824437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:19.052 [2024-11-26 01:05:41.824445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.824464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.052 [2024-11-26 01:05:41.825038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.825053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:19.052 [2024-11-26 01:05:41.825062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:20:19.052 [2024-11-26 01:05:41.825068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.825264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.825279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:19.052 [2024-11-26 01:05:41.825291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:20:19.052 [2024-11-26 01:05:41.825298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.827722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.827738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:19.052 [2024-11-26 01:05:41.827747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:20:19.052 [2024-11-26 01:05:41.827755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.832417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.832436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:19.052 [2024-11-26 01:05:41.832446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.647 ms 00:20:19.052 [2024-11-26 01:05:41.832454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.834623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.834647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:19.052 [2024-11-26 01:05:41.834656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.116 ms 00:20:19.052 [2024-11-26 01:05:41.834662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.839603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.839628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:19.052 [2024-11-26 01:05:41.839637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.910 ms 00:20:19.052 [2024-11-26 01:05:41.839651] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.839746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.839756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:19.052 [2024-11-26 01:05:41.839765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:19.052 [2024-11-26 01:05:41.839771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.842584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.842607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:19.052 [2024-11-26 01:05:41.842616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.796 ms 00:20:19.052 [2024-11-26 01:05:41.842621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.844636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.844657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:19.052 [2024-11-26 01:05:41.844666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.983 ms 00:20:19.052 [2024-11-26 01:05:41.844672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.846118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.846140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:19.052 [2024-11-26 01:05:41.846149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.417 ms 00:20:19.052 [2024-11-26 01:05:41.846154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.847792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.052 [2024-11-26 01:05:41.847814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:19.052 [2024-11-26 01:05:41.847823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:20:19.052 [2024-11-26 01:05:41.847829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.052 [2024-11-26 01:05:41.847866] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:19.052 [2024-11-26 01:05:41.847878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 
00:20:19.052 [2024-11-26 01:05:41.847942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:19.052 [2024-11-26 01:05:41.847971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.847978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.847983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.847991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.847997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 
wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848451] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:19.053 [2024-11-26 01:05:41.848589] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:19.053 [2024-11-26 01:05:41.848597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24fe6672-7f6f-41a2-a551-9dd1d146a529 00:20:19.053 [2024-11-26 01:05:41.848603] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:19.054 [2024-11-26 01:05:41.848611] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:19.054 [2024-11-26 01:05:41.848617] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:19.054 [2024-11-26 01:05:41.848625] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:19.054 [2024-11-26 01:05:41.848632] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:19.054 [2024-11-26 01:05:41.848640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:19.054 [2024-11-26 01:05:41.848649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:19.054 
[2024-11-26 01:05:41.848656] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:19.054 [2024-11-26 01:05:41.848661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:19.054 [2024-11-26 01:05:41.848668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.054 [2024-11-26 01:05:41.848673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:19.054 [2024-11-26 01:05:41.848681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.803 ms 00:20:19.054 [2024-11-26 01:05:41.848687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.850012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.054 [2024-11-26 01:05:41.850033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:19.054 [2024-11-26 01:05:41.850044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:20:19.054 [2024-11-26 01:05:41.850067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.850136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.054 [2024-11-26 01:05:41.850144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:19.054 [2024-11-26 01:05:41.850152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:19.054 [2024-11-26 01:05:41.850158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.856104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.856137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.054 [2024-11-26 01:05:41.856146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.856152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.856204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.856210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.054 [2024-11-26 01:05:41.856218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.856225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.856284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.856292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.054 [2024-11-26 01:05:41.856304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.856310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.856325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.856332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.054 [2024-11-26 01:05:41.856339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.856345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.867337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.867368] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.054 [2024-11-26 01:05:41.867382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.867388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.876518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.876550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.054 [2024-11-26 01:05:41.876561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.876567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.876640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.876648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.054 [2024-11-26 01:05:41.876656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.876663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.876697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.876704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.054 [2024-11-26 01:05:41.876717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.876723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.876784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.876792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.054 [2024-11-26 01:05:41.876800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.876805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.876835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.876855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:19.054 [2024-11-26 01:05:41.876862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.876869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.876909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.876916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.054 [2024-11-26 01:05:41.876924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.876931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.876977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:19.054 [2024-11-26 01:05:41.876985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.054 [2024-11-26 01:05:41.876993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:19.054 [2024-11-26 01:05:41.877001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.054 [2024-11-26 01:05:41.877125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 52.692 ms, result 0 00:20:19.054 true 00:20:19.054 01:05:41 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90047 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90047 ']' 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90047 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90047 00:20:19.054 killing process with pid 90047 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90047' 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 90047 00:20:19.054 01:05:41 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 90047 00:20:24.346 01:05:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:28.555 262144+0 records in 00:20:28.555 262144+0 records out 00:20:28.555 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.35186 s, 247 MB/s 00:20:28.555 01:05:51 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:30.469 01:05:53 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:30.469 [2024-11-26 01:05:53.194410] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:20:30.469 [2024-11-26 01:05:53.194526] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90261 ] 00:20:30.469 [2024-11-26 01:05:53.328279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
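To recap the sequence captured between the two startups: bdev_ftl_create brought up ftl0 with UUID 24fe6672-7f6f-41a2-a551-9dd1d146a529 (the first startup logged "FTL layout setup mode 1" and "Create new FTL"), restore.sh@61-63 wrapped save_subsystem_config -n bdev in a {"subsystems": [...]} envelope, bdev_ftl_unload shut the device down cleanly ('FTL shutdown' in 52.692 ms), the pid-90047 app was killed, and dd produced 1 GiB of random data: 262144 records * 4096 B = 1073741824 B in 4.35 s, the reported 247 MB/s. A condensed sketch of that flow, with commands and paths as they appear in this trace (the redirect of the envelope into ftl.json is inferred from the spdk_dd invocation, not shown verbatim above):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ftl_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

    # create the FTL bdev over the lvol, with the split cache and a 10 MiB L2P DRAM cap
    "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d a1301968-31c8-4df1-bfad-46043de72262 \
        --l2p_dram_limit 10 -c nvc0n1p0
    # snapshot the bdev subsystem config so spdk_dd can re-attach after shutdown
    { echo '{"subsystems": ['
      "$rpc_py" save_subsystem_config -n bdev
      echo ']}'; } > "$ftl_json"
    "$rpc_py" bdev_ftl_unload -b ftl0
    # generate and checksum the 1 GiB test payload
    dd if=/dev/urandom of="$testfile" bs=4K count=256K
    md5sum "$testfile"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$testfile" --ob=ftl0 --json="$ftl_json"

The spdk_dd startup that follows confirms the restore path rather than a fresh create: the superblock is loaded ("Load super block", "SHM: clean 0, shm_clean 0", "FTL layout setup mode 0") against the same nvc0n1p0 write buffer.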
00:20:30.469 [2024-11-26 01:05:53.356989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.729 [2024-11-26 01:05:53.397440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.729 [2024-11-26 01:05:53.544995] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.729 [2024-11-26 01:05:53.545095] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.992 [2024-11-26 01:05:53.709613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.709670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:30.992 [2024-11-26 01:05:53.709687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:30.992 [2024-11-26 01:05:53.709697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.709760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.709772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.992 [2024-11-26 01:05:53.709781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:30.992 [2024-11-26 01:05:53.709793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.709814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:30.992 [2024-11-26 01:05:53.710120] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:30.992 [2024-11-26 01:05:53.710152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.710164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.992 [2024-11-26 01:05:53.710175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:20:30.992 [2024-11-26 01:05:53.710183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.712428] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:30.992 [2024-11-26 01:05:53.717122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.717173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:30.992 [2024-11-26 01:05:53.717193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:20:30.992 [2024-11-26 01:05:53.717205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.717292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.717308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:30.992 [2024-11-26 01:05:53.717322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:30.992 [2024-11-26 01:05:53.717330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.728721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.728768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.992 [2024-11-26 01:05:53.728780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.341 ms 00:20:30.992 [2024-11-26 01:05:53.728789] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.728925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.728936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.992 [2024-11-26 01:05:53.728948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:20:30.992 [2024-11-26 01:05:53.728957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.729024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.729035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:30.992 [2024-11-26 01:05:53.729045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:30.992 [2024-11-26 01:05:53.729058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.729082] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:30.992 [2024-11-26 01:05:53.731756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.731795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.992 [2024-11-26 01:05:53.731805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:20:30.992 [2024-11-26 01:05:53.731813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.731865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.731874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:30.992 [2024-11-26 01:05:53.731896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:30.992 [2024-11-26 01:05:53.731907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.731932] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:30.992 [2024-11-26 01:05:53.731958] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:30.992 [2024-11-26 01:05:53.731999] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:30.992 [2024-11-26 01:05:53.732018] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:30.992 [2024-11-26 01:05:53.732132] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:30.992 [2024-11-26 01:05:53.732148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:30.992 [2024-11-26 01:05:53.732166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:30.992 [2024-11-26 01:05:53.732178] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732188] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732197] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:30.992 [2024-11-26 01:05:53.732205] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:20:30.992 [2024-11-26 01:05:53.732213] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:30.992 [2024-11-26 01:05:53.732223] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:30.992 [2024-11-26 01:05:53.732232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.732241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:30.992 [2024-11-26 01:05:53.732252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:20:30.992 [2024-11-26 01:05:53.732265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.732348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.992 [2024-11-26 01:05:53.732358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:30.992 [2024-11-26 01:05:53.732366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:30.992 [2024-11-26 01:05:53.732374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.992 [2024-11-26 01:05:53.732474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:30.992 [2024-11-26 01:05:53.732506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:30.992 [2024-11-26 01:05:53.732522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:30.992 [2024-11-26 01:05:53.732555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:30.992 [2024-11-26 01:05:53.732594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.992 [2024-11-26 01:05:53.732615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:30.992 [2024-11-26 01:05:53.732627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:30.992 [2024-11-26 01:05:53.732636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.992 [2024-11-26 01:05:53.732645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:30.992 [2024-11-26 01:05:53.732653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:30.992 [2024-11-26 01:05:53.732662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:30.992 [2024-11-26 01:05:53.732680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:30.992 [2024-11-26 01:05:53.732704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732712] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:30.992 [2024-11-26 01:05:53.732728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:30.992 [2024-11-26 01:05:53.732757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:30.992 [2024-11-26 01:05:53.732780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.992 [2024-11-26 01:05:53.732794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:30.992 [2024-11-26 01:05:53.732802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:30.992 [2024-11-26 01:05:53.732810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.992 [2024-11-26 01:05:53.732818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:30.993 [2024-11-26 01:05:53.732825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:30.993 [2024-11-26 01:05:53.732831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.993 [2024-11-26 01:05:53.732839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:30.993 [2024-11-26 01:05:53.732862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:30.993 [2024-11-26 01:05:53.732869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.993 [2024-11-26 01:05:53.732878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:30.993 [2024-11-26 01:05:53.732889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:30.993 [2024-11-26 01:05:53.732896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.993 [2024-11-26 01:05:53.732907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:30.993 [2024-11-26 01:05:53.732916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:30.993 [2024-11-26 01:05:53.732926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.993 [2024-11-26 01:05:53.732935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.993 [2024-11-26 01:05:53.732944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:30.993 [2024-11-26 01:05:53.732954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:30.993 [2024-11-26 01:05:53.732961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:30.993 [2024-11-26 01:05:53.732968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:30.993 [2024-11-26 01:05:53.732975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:30.993 [2024-11-26 01:05:53.732985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:20:30.993 [2024-11-26 01:05:53.732996] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:30.993 [2024-11-26 01:05:53.733008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.993 [2024-11-26 01:05:53.733018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:30.993 [2024-11-26 01:05:53.733026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:30.993 [2024-11-26 01:05:53.733037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:30.993 [2024-11-26 01:05:53.733045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:30.993 [2024-11-26 01:05:53.733053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:30.993 [2024-11-26 01:05:53.733061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:30.993 [2024-11-26 01:05:53.733070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:30.993 [2024-11-26 01:05:53.733078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:30.993 [2024-11-26 01:05:53.733086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:30.993 [2024-11-26 01:05:53.733094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:30.993 [2024-11-26 01:05:53.733101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:30.993 [2024-11-26 01:05:53.733107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:30.993 [2024-11-26 01:05:53.733113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:30.993 [2024-11-26 01:05:53.733121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:30.993 [2024-11-26 01:05:53.733127] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:30.993 [2024-11-26 01:05:53.733135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.993 [2024-11-26 01:05:53.733147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:30.993 [2024-11-26 01:05:53.733154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:30.993 [2024-11-26 01:05:53.733165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:30.993 [2024-11-26 01:05:53.733172] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:30.993 [2024-11-26 01:05:53.733182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.733192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:30.993 [2024-11-26 01:05:53.733204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:20:30.993 [2024-11-26 01:05:53.733215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.753124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.753170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.993 [2024-11-26 01:05:53.753183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.844 ms 00:20:30.993 [2024-11-26 01:05:53.753198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.753293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.753303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:30.993 [2024-11-26 01:05:53.753313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:30.993 [2024-11-26 01:05:53.753324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.781364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.781440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.993 [2024-11-26 01:05:53.781464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.976 ms 00:20:30.993 [2024-11-26 01:05:53.781481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.781554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.781574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.993 [2024-11-26 01:05:53.781601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:30.993 [2024-11-26 01:05:53.781623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.782493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.782549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.993 [2024-11-26 01:05:53.782570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:20:30.993 [2024-11-26 01:05:53.782586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.782881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.782903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.993 [2024-11-26 01:05:53.782919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:20:30.993 [2024-11-26 01:05:53.782933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.794267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 
01:05:53.794310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.993 [2024-11-26 01:05:53.794322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.285 ms 00:20:30.993 [2024-11-26 01:05:53.794343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.799085] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:30.993 [2024-11-26 01:05:53.799138] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:30.993 [2024-11-26 01:05:53.799153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.799163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:30.993 [2024-11-26 01:05:53.799172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.700 ms 00:20:30.993 [2024-11-26 01:05:53.799180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.815383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.815441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:30.993 [2024-11-26 01:05:53.815454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.150 ms 00:20:30.993 [2024-11-26 01:05:53.815462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.818540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.818592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:30.993 [2024-11-26 01:05:53.818603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:20:30.993 [2024-11-26 01:05:53.818611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.821319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.821365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:30.993 [2024-11-26 01:05:53.821375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.662 ms 00:20:30.993 [2024-11-26 01:05:53.821394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.821743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.821768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:30.993 [2024-11-26 01:05:53.821785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:30.993 [2024-11-26 01:05:53.821800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.853192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.853254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:30.993 [2024-11-26 01:05:53.853268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.369 ms 00:20:30.993 [2024-11-26 01:05:53.853278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.993 [2024-11-26 01:05:53.861898] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:30.993 [2024-11-26 01:05:53.865194] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.993 [2024-11-26 01:05:53.865249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:30.993 [2024-11-26 01:05:53.865261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.865 ms 00:20:30.994 [2024-11-26 01:05:53.865271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.994 [2024-11-26 01:05:53.865349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.994 [2024-11-26 01:05:53.865361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:30.994 [2024-11-26 01:05:53.865377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:30.994 [2024-11-26 01:05:53.865387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.994 [2024-11-26 01:05:53.865466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.994 [2024-11-26 01:05:53.865480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:30.994 [2024-11-26 01:05:53.865493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:30.994 [2024-11-26 01:05:53.865502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.994 [2024-11-26 01:05:53.865527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.994 [2024-11-26 01:05:53.865537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:30.994 [2024-11-26 01:05:53.865555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:30.994 [2024-11-26 01:05:53.865563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.994 [2024-11-26 01:05:53.865607] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:30.994 [2024-11-26 01:05:53.865620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.994 [2024-11-26 01:05:53.865629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:30.994 [2024-11-26 01:05:53.865638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:30.994 [2024-11-26 01:05:53.865654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.994 [2024-11-26 01:05:53.871887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.994 [2024-11-26 01:05:53.871945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:30.994 [2024-11-26 01:05:53.871958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.211 ms 00:20:30.994 [2024-11-26 01:05:53.871971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.994 [2024-11-26 01:05:53.872063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.994 [2024-11-26 01:05:53.872074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:30.994 [2024-11-26 01:05:53.872085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:30.994 [2024-11-26 01:05:53.872094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.994 [2024-11-26 01:05:53.873478] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.349 ms, result 0 00:20:32.382  [2024-11-26T01:05:56.240Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-26T01:05:57.187Z] Copying: 38/1024 [MB] (19 MBps) 
[2024-11-26T01:05:58.132Z] Copying: 57/1024 [MB] (18 MBps) [2024-11-26T01:05:59.079Z] Copying: 75/1024 [MB] (18 MBps) [2024-11-26T01:06:00.022Z] Copying: 93/1024 [MB] (17 MBps) [2024-11-26T01:06:01.071Z] Copying: 111/1024 [MB] (17 MBps) [2024-11-26T01:06:02.013Z] Copying: 132/1024 [MB] (20 MBps) [2024-11-26T01:06:02.955Z] Copying: 144/1024 [MB] (12 MBps) [2024-11-26T01:06:03.900Z] Copying: 160/1024 [MB] (15 MBps) [2024-11-26T01:06:05.287Z] Copying: 172/1024 [MB] (12 MBps) [2024-11-26T01:06:06.228Z] Copying: 184/1024 [MB] (12 MBps) [2024-11-26T01:06:07.174Z] Copying: 206/1024 [MB] (21 MBps) [2024-11-26T01:06:08.120Z] Copying: 220/1024 [MB] (14 MBps) [2024-11-26T01:06:09.065Z] Copying: 236/1024 [MB] (15 MBps) [2024-11-26T01:06:10.011Z] Copying: 246/1024 [MB] (10 MBps) [2024-11-26T01:06:10.956Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-26T01:06:11.897Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-26T01:06:13.282Z] Copying: 277/1024 [MB] (10 MBps) [2024-11-26T01:06:14.224Z] Copying: 294548/1048576 [kB] (10204 kBps) [2024-11-26T01:06:15.168Z] Copying: 304712/1048576 [kB] (10164 kBps) [2024-11-26T01:06:16.113Z] Copying: 307/1024 [MB] (10 MBps) [2024-11-26T01:06:17.059Z] Copying: 318/1024 [MB] (10 MBps) [2024-11-26T01:06:18.005Z] Copying: 331/1024 [MB] (13 MBps) [2024-11-26T01:06:18.950Z] Copying: 341/1024 [MB] (10 MBps) [2024-11-26T01:06:19.891Z] Copying: 359/1024 [MB] (17 MBps) [2024-11-26T01:06:21.275Z] Copying: 374/1024 [MB] (14 MBps) [2024-11-26T01:06:22.219Z] Copying: 389/1024 [MB] (15 MBps) [2024-11-26T01:06:23.161Z] Copying: 400/1024 [MB] (10 MBps) [2024-11-26T01:06:24.102Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-26T01:06:25.043Z] Copying: 420/1024 [MB] (10 MBps) [2024-11-26T01:06:25.989Z] Copying: 431/1024 [MB] (10 MBps) [2024-11-26T01:06:26.934Z] Copying: 442/1024 [MB] (11 MBps) [2024-11-26T01:06:28.322Z] Copying: 460/1024 [MB] (17 MBps) [2024-11-26T01:06:28.897Z] Copying: 473/1024 [MB] (13 MBps) [2024-11-26T01:06:30.285Z] Copying: 483/1024 [MB] (10 MBps) [2024-11-26T01:06:31.231Z] Copying: 499/1024 [MB] (15 MBps) [2024-11-26T01:06:32.177Z] Copying: 510/1024 [MB] (11 MBps) [2024-11-26T01:06:33.119Z] Copying: 522/1024 [MB] (12 MBps) [2024-11-26T01:06:34.066Z] Copying: 536/1024 [MB] (14 MBps) [2024-11-26T01:06:35.011Z] Copying: 549/1024 [MB] (12 MBps) [2024-11-26T01:06:36.041Z] Copying: 562/1024 [MB] (13 MBps) [2024-11-26T01:06:36.982Z] Copying: 574/1024 [MB] (12 MBps) [2024-11-26T01:06:37.930Z] Copying: 591/1024 [MB] (16 MBps) [2024-11-26T01:06:39.317Z] Copying: 603/1024 [MB] (12 MBps) [2024-11-26T01:06:39.891Z] Copying: 617/1024 [MB] (14 MBps) [2024-11-26T01:06:41.280Z] Copying: 628/1024 [MB] (10 MBps) [2024-11-26T01:06:42.230Z] Copying: 639/1024 [MB] (11 MBps) [2024-11-26T01:06:43.170Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-26T01:06:44.108Z] Copying: 666/1024 [MB] (16 MBps) [2024-11-26T01:06:45.055Z] Copying: 717/1024 [MB] (51 MBps) [2024-11-26T01:06:45.999Z] Copying: 739/1024 [MB] (21 MBps) [2024-11-26T01:06:46.939Z] Copying: 750/1024 [MB] (10 MBps) [2024-11-26T01:06:48.326Z] Copying: 764/1024 [MB] (14 MBps) [2024-11-26T01:06:48.900Z] Copying: 779/1024 [MB] (14 MBps) [2024-11-26T01:06:50.286Z] Copying: 792/1024 [MB] (13 MBps) [2024-11-26T01:06:51.229Z] Copying: 811/1024 [MB] (18 MBps) [2024-11-26T01:06:52.176Z] Copying: 828/1024 [MB] (16 MBps) [2024-11-26T01:06:53.121Z] Copying: 842/1024 [MB] (14 MBps) [2024-11-26T01:06:54.065Z] Copying: 856/1024 [MB] (14 MBps) [2024-11-26T01:06:55.008Z] Copying: 870/1024 [MB] (13 MBps) [2024-11-26T01:06:55.952Z] Copying: 885/1024 [MB] (15 MBps) 
[2024-11-26T01:06:56.894Z] Copying: 902/1024 [MB] (16 MBps) [2024-11-26T01:06:58.281Z] Copying: 914/1024 [MB] (11 MBps) [2024-11-26T01:06:59.223Z] Copying: 928/1024 [MB] (14 MBps) [2024-11-26T01:07:00.168Z] Copying: 942/1024 [MB] (14 MBps) [2024-11-26T01:07:01.112Z] Copying: 957/1024 [MB] (15 MBps) [2024-11-26T01:07:02.058Z] Copying: 967/1024 [MB] (10 MBps) [2024-11-26T01:07:03.003Z] Copying: 1001296/1048576 [kB] (10216 kBps) [2024-11-26T01:07:03.950Z] Copying: 987/1024 [MB] (10 MBps) [2024-11-26T01:07:04.893Z] Copying: 998/1024 [MB] (10 MBps) [2024-11-26T01:07:05.153Z] Copying: 1011/1024 [MB] (13 MBps) [2024-11-26T01:07:05.153Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-26 01:07:05.118111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.236 [2024-11-26 01:07:05.118146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:42.236 [2024-11-26 01:07:05.118157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:42.236 [2024-11-26 01:07:05.118167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.118188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:42.237 [2024-11-26 01:07:05.118579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.118607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:42.237 [2024-11-26 01:07:05.118615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:21:42.237 [2024-11-26 01:07:05.118621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.120024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.120054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:42.237 [2024-11-26 01:07:05.120062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:21:42.237 [2024-11-26 01:07:05.120069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.130893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.130921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:42.237 [2024-11-26 01:07:05.130930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.808 ms 00:21:42.237 [2024-11-26 01:07:05.130936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.135826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.135993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:42.237 [2024-11-26 01:07:05.136008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.867 ms 00:21:42.237 [2024-11-26 01:07:05.136015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.136922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.136951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:42.237 [2024-11-26 01:07:05.136959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:21:42.237 [2024-11-26 01:07:05.136964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.140368] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.140400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:42.237 [2024-11-26 01:07:05.140407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:21:42.237 [2024-11-26 01:07:05.140413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.140497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.140505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:42.237 [2024-11-26 01:07:05.140511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:42.237 [2024-11-26 01:07:05.140516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.142234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.142260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:42.237 [2024-11-26 01:07:05.142274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:21:42.237 [2024-11-26 01:07:05.142280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.143602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.143630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:42.237 [2024-11-26 01:07:05.143636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:21:42.237 [2024-11-26 01:07:05.143641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.144587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.144615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:42.237 [2024-11-26 01:07:05.144622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:21:42.237 [2024-11-26 01:07:05.144627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.145745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.237 [2024-11-26 01:07:05.145772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:42.237 [2024-11-26 01:07:05.145779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:21:42.237 [2024-11-26 01:07:05.145784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.237 [2024-11-26 01:07:05.145806] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:42.237 [2024-11-26 01:07:05.145817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145859] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.145997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 
01:07:05.146003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:42.237 [2024-11-26 01:07:05.146133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:21:42.238 [2024-11-26 01:07:05.146156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:42.238 [2024-11-26 01:07:05.146422] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:42.238 [2024-11-26 01:07:05.146428] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24fe6672-7f6f-41a2-a551-9dd1d146a529 00:21:42.238 [2024-11-26 01:07:05.146435] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:42.238 [2024-11-26 01:07:05.146440] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:42.238 [2024-11-26 01:07:05.146446] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:42.238 [2024-11-26 01:07:05.146453] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:42.238 [2024-11-26 01:07:05.146458] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:42.238 [2024-11-26 01:07:05.146465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:42.238 [2024-11-26 01:07:05.146471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:42.238 [2024-11-26 01:07:05.146475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:42.238 [2024-11-26 01:07:05.146480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:42.238 [2024-11-26 01:07:05.146486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.238 [2024-11-26 01:07:05.146498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:42.238 [2024-11-26 01:07:05.146509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:21:42.238 [2024-11-26 01:07:05.146515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.238 [2024-11-26 01:07:05.147758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.238 [2024-11-26 01:07:05.147780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:42.238 [2024-11-26 01:07:05.147788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.232 ms 00:21:42.238 [2024-11-26 01:07:05.147794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.238 [2024-11-26 01:07:05.147879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.238 [2024-11-26 01:07:05.147887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:42.238 [2024-11-26 01:07:05.147897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:42.238 [2024-11-26 01:07:05.147903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.238 [2024-11-26 01:07:05.152091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.238 [2024-11-26 01:07:05.152123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.238 [2024-11-26 01:07:05.152130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.238 [2024-11-26 01:07:05.152136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.238 [2024-11-26 01:07:05.152184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.238 [2024-11-26 01:07:05.152191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.238 [2024-11-26 01:07:05.152196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.238 [2024-11-26 01:07:05.152202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.499 [2024-11-26 01:07:05.152237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.152244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.500 [2024-11-26 01:07:05.152250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.152255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.152266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.152279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:21:42.500 [2024-11-26 01:07:05.152285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.152290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.159788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.159820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.500 [2024-11-26 01:07:05.159828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.159833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.165997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.166029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.500 [2024-11-26 01:07:05.166052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.166059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.166092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.166100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.500 [2024-11-26 01:07:05.166106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.166111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.166130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.166136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.500 [2024-11-26 01:07:05.166147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.166152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.166201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.166208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.500 [2024-11-26 01:07:05.166214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.166224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.166248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.166255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:42.500 [2024-11-26 01:07:05.166264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.166271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.166297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.166304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.500 [2024-11-26 01:07:05.166310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.166316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.166350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.500 [2024-11-26 01:07:05.166359] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.500 [2024-11-26 01:07:05.166366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.500 [2024-11-26 01:07:05.166371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.500 [2024-11-26 01:07:05.166460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.337 ms, result 0 00:21:42.500 00:21:42.500 00:21:42.500 01:07:05 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:42.761 [2024-11-26 01:07:05.421189] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:21:42.761 [2024-11-26 01:07:05.421307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91004 ] 00:21:42.761 [2024-11-26 01:07:05.553597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:42.761 [2024-11-26 01:07:05.583231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.761 [2024-11-26 01:07:05.607046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.023 [2024-11-26 01:07:05.718282] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.023 [2024-11-26 01:07:05.718362] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.023 [2024-11-26 01:07:05.879112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.023 [2024-11-26 01:07:05.879178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:43.023 [2024-11-26 01:07:05.879193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:43.023 [2024-11-26 01:07:05.879202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.023 [2024-11-26 01:07:05.879259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.023 [2024-11-26 01:07:05.879274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.023 [2024-11-26 01:07:05.879283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:43.023 [2024-11-26 01:07:05.879294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.023 [2024-11-26 01:07:05.879314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:43.023 [2024-11-26 01:07:05.879700] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:43.023 [2024-11-26 01:07:05.879745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.023 [2024-11-26 01:07:05.879757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.023 [2024-11-26 01:07:05.879767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:21:43.023 [2024-11-26 01:07:05.879775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.023 [2024-11-26 01:07:05.881488] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:43.024 [2024-11-26 01:07:05.885315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.885368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:43.024 [2024-11-26 01:07:05.885385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.829 ms 00:21:43.024 [2024-11-26 01:07:05.885396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.885479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.885490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:43.024 [2024-11-26 01:07:05.885500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:43.024 [2024-11-26 01:07:05.885507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.893954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.893999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.024 [2024-11-26 01:07:05.894020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.402 ms 00:21:43.024 [2024-11-26 01:07:05.894027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.894140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.894155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.024 [2024-11-26 01:07:05.894167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:43.024 [2024-11-26 01:07:05.894180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.894239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.894249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:43.024 [2024-11-26 01:07:05.894261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:43.024 [2024-11-26 01:07:05.894269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.894292] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:43.024 [2024-11-26 01:07:05.896347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.896384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.024 [2024-11-26 01:07:05.896403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:21:43.024 [2024-11-26 01:07:05.896411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.896443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.896455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:43.024 [2024-11-26 01:07:05.896471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:43.024 [2024-11-26 01:07:05.896482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.896504] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:43.024 [2024-11-26 01:07:05.896526] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:43.024 [2024-11-26 01:07:05.896570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:43.024 [2024-11-26 01:07:05.896590] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:43.024 [2024-11-26 01:07:05.896695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:43.024 [2024-11-26 01:07:05.896709] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:43.024 [2024-11-26 01:07:05.896723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:43.024 [2024-11-26 01:07:05.896734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:43.024 [2024-11-26 01:07:05.896742] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:43.024 [2024-11-26 01:07:05.896755] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:43.024 [2024-11-26 01:07:05.896763] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:43.024 [2024-11-26 01:07:05.896770] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:43.024 [2024-11-26 01:07:05.896777] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:43.024 [2024-11-26 01:07:05.896789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.896797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:43.024 [2024-11-26 01:07:05.896813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:21:43.024 [2024-11-26 01:07:05.896820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.896921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.024 [2024-11-26 01:07:05.896931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:43.024 [2024-11-26 01:07:05.896939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:43.024 [2024-11-26 01:07:05.896946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.024 [2024-11-26 01:07:05.897044] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:43.024 [2024-11-26 01:07:05.897077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:43.024 [2024-11-26 01:07:05.897087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:43.024 [2024-11-26 01:07:05.897122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:43.024 [2024-11-26 01:07:05.897154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:43.024 [2024-11-26 
01:07:05.897165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.024 [2024-11-26 01:07:05.897173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:43.024 [2024-11-26 01:07:05.897181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:43.024 [2024-11-26 01:07:05.897189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.024 [2024-11-26 01:07:05.897197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:43.024 [2024-11-26 01:07:05.897206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:43.024 [2024-11-26 01:07:05.897215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:43.024 [2024-11-26 01:07:05.897231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:43.024 [2024-11-26 01:07:05.897254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:43.024 [2024-11-26 01:07:05.897278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:43.024 [2024-11-26 01:07:05.897306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:43.024 [2024-11-26 01:07:05.897331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:43.024 [2024-11-26 01:07:05.897355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.024 [2024-11-26 01:07:05.897370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:43.024 [2024-11-26 01:07:05.897378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:43.024 [2024-11-26 01:07:05.897386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.024 [2024-11-26 01:07:05.897394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:43.024 [2024-11-26 01:07:05.897401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:43.024 [2024-11-26 01:07:05.897409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:21:43.024 [2024-11-26 01:07:05.897426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:43.024 [2024-11-26 01:07:05.897435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897442] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:43.024 [2024-11-26 01:07:05.897450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:43.024 [2024-11-26 01:07:05.897459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.024 [2024-11-26 01:07:05.897470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.024 [2024-11-26 01:07:05.897480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:43.024 [2024-11-26 01:07:05.897488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:43.024 [2024-11-26 01:07:05.897498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:43.024 [2024-11-26 01:07:05.897506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:43.024 [2024-11-26 01:07:05.897514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:43.024 [2024-11-26 01:07:05.897522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:43.025 [2024-11-26 01:07:05.897532] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:43.025 [2024-11-26 01:07:05.897542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.025 [2024-11-26 01:07:05.897551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:43.025 [2024-11-26 01:07:05.897559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:43.025 [2024-11-26 01:07:05.897569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:43.025 [2024-11-26 01:07:05.897577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:43.025 [2024-11-26 01:07:05.897585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:43.025 [2024-11-26 01:07:05.897592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:43.025 [2024-11-26 01:07:05.897601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:43.025 [2024-11-26 01:07:05.897609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:43.025 [2024-11-26 01:07:05.897616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:43.025 [2024-11-26 01:07:05.897624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:43.025 [2024-11-26 01:07:05.897631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:43.025 [2024-11-26 01:07:05.897637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:43.025 [2024-11-26 01:07:05.897644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:43.025 [2024-11-26 01:07:05.897652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:43.025 [2024-11-26 01:07:05.897660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:43.025 [2024-11-26 01:07:05.897667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.025 [2024-11-26 01:07:05.897675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:43.025 [2024-11-26 01:07:05.897682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:43.025 [2024-11-26 01:07:05.897692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:43.025 [2024-11-26 01:07:05.897699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:43.025 [2024-11-26 01:07:05.897707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.025 [2024-11-26 01:07:05.897714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:43.025 [2024-11-26 01:07:05.897724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:21:43.025 [2024-11-26 01:07:05.897733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.025 [2024-11-26 01:07:05.911524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.025 [2024-11-26 01:07:05.911578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.025 [2024-11-26 01:07:05.911590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.746 ms 00:21:43.025 [2024-11-26 01:07:05.911599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.025 [2024-11-26 01:07:05.911685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.025 [2024-11-26 01:07:05.911695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.025 [2024-11-26 01:07:05.911705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:43.025 [2024-11-26 01:07:05.911718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.025 [2024-11-26 01:07:05.932822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.025 [2024-11-26 01:07:05.932912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.025 [2024-11-26 01:07:05.932929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.046 ms 00:21:43.025 [2024-11-26 01:07:05.932940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.025 [2024-11-26 01:07:05.932994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.025 [2024-11-26 
01:07:05.933007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.025 [2024-11-26 01:07:05.933028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:43.025 [2024-11-26 01:07:05.933040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.025 [2024-11-26 01:07:05.933613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.025 [2024-11-26 01:07:05.933660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.025 [2024-11-26 01:07:05.933673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:21:43.025 [2024-11-26 01:07:05.933684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.025 [2024-11-26 01:07:05.933887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.025 [2024-11-26 01:07:05.933907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.025 [2024-11-26 01:07:05.933918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:21:43.025 [2024-11-26 01:07:05.933936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.287 [2024-11-26 01:07:05.941912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.287 [2024-11-26 01:07:05.941954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.287 [2024-11-26 01:07:05.941965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.952 ms 00:21:43.287 [2024-11-26 01:07:05.941984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.287 [2024-11-26 01:07:05.945821] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:43.287 [2024-11-26 01:07:05.945906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:43.287 [2024-11-26 01:07:05.945919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:05.945928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:43.288 [2024-11-26 01:07:05.945936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.836 ms 00:21:43.288 [2024-11-26 01:07:05.945943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:05.961472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:05.961525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:43.288 [2024-11-26 01:07:05.961537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.476 ms 00:21:43.288 [2024-11-26 01:07:05.961545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:05.964502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:05.964552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:43.288 [2024-11-26 01:07:05.964562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.903 ms 00:21:43.288 [2024-11-26 01:07:05.964569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:05.967378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:05.967426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:21:43.288 [2024-11-26 01:07:05.967437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:21:43.288 [2024-11-26 01:07:05.967455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:05.967800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:05.967824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:43.288 [2024-11-26 01:07:05.967857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:21:43.288 [2024-11-26 01:07:05.967869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:05.991235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:05.991312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:43.288 [2024-11-26 01:07:05.991325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.346 ms 00:21:43.288 [2024-11-26 01:07:05.991334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:05.999356] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:43.288 [2024-11-26 01:07:06.002521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:06.002567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:43.288 [2024-11-26 01:07:06.002586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.134 ms 00:21:43.288 [2024-11-26 01:07:06.002595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:06.002672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:06.002690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:43.288 [2024-11-26 01:07:06.002700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:43.288 [2024-11-26 01:07:06.002708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:06.002794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:06.002805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:43.288 [2024-11-26 01:07:06.002814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:43.288 [2024-11-26 01:07:06.002822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:06.002863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:06.002873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:43.288 [2024-11-26 01:07:06.002882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:43.288 [2024-11-26 01:07:06.002892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:06.002928] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:43.288 [2024-11-26 01:07:06.002939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:06.002950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:43.288 [2024-11-26 01:07:06.002959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.012 ms 00:21:43.288 [2024-11-26 01:07:06.002967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:06.008467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:06.008529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:43.288 [2024-11-26 01:07:06.008541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.481 ms 00:21:43.288 [2024-11-26 01:07:06.008550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:06.008636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.288 [2024-11-26 01:07:06.008653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:43.288 [2024-11-26 01:07:06.008667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:43.288 [2024-11-26 01:07:06.008676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.288 [2024-11-26 01:07:06.009810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.273 ms, result 0 00:21:44.677  [2024-11-26T01:07:08.539Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-26T01:07:09.486Z] Copying: 26/1024 [MB] (10 MBps) [2024-11-26T01:07:10.518Z] Copying: 43/1024 [MB] (17 MBps) [2024-11-26T01:07:11.462Z] Copying: 55/1024 [MB] (12 MBps) [2024-11-26T01:07:12.401Z] Copying: 71/1024 [MB] (15 MBps) [2024-11-26T01:07:13.343Z] Copying: 90/1024 [MB] (19 MBps) [2024-11-26T01:07:14.285Z] Copying: 105/1024 [MB] (14 MBps) [2024-11-26T01:07:15.229Z] Copying: 120/1024 [MB] (15 MBps) [2024-11-26T01:07:16.613Z] Copying: 140/1024 [MB] (20 MBps) [2024-11-26T01:07:17.555Z] Copying: 157/1024 [MB] (16 MBps) [2024-11-26T01:07:18.502Z] Copying: 175/1024 [MB] (18 MBps) [2024-11-26T01:07:19.447Z] Copying: 189/1024 [MB] (13 MBps) [2024-11-26T01:07:20.393Z] Copying: 200/1024 [MB] (10 MBps) [2024-11-26T01:07:21.340Z] Copying: 211/1024 [MB] (10 MBps) [2024-11-26T01:07:22.284Z] Copying: 221/1024 [MB] (10 MBps) [2024-11-26T01:07:23.229Z] Copying: 232/1024 [MB] (10 MBps) [2024-11-26T01:07:24.619Z] Copying: 242/1024 [MB] (10 MBps) [2024-11-26T01:07:25.192Z] Copying: 253/1024 [MB] (10 MBps) [2024-11-26T01:07:26.578Z] Copying: 264/1024 [MB] (11 MBps) [2024-11-26T01:07:27.525Z] Copying: 274/1024 [MB] (10 MBps) [2024-11-26T01:07:28.469Z] Copying: 285/1024 [MB] (10 MBps) [2024-11-26T01:07:29.412Z] Copying: 296/1024 [MB] (11 MBps) [2024-11-26T01:07:30.357Z] Copying: 312/1024 [MB] (15 MBps) [2024-11-26T01:07:31.297Z] Copying: 323/1024 [MB] (10 MBps) [2024-11-26T01:07:32.240Z] Copying: 334/1024 [MB] (11 MBps) [2024-11-26T01:07:33.627Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-26T01:07:34.201Z] Copying: 357/1024 [MB] (11 MBps) [2024-11-26T01:07:35.591Z] Copying: 378/1024 [MB] (20 MBps) [2024-11-26T01:07:36.536Z] Copying: 392/1024 [MB] (14 MBps) [2024-11-26T01:07:37.480Z] Copying: 408/1024 [MB] (16 MBps) [2024-11-26T01:07:38.424Z] Copying: 424/1024 [MB] (15 MBps) [2024-11-26T01:07:39.366Z] Copying: 440/1024 [MB] (16 MBps) [2024-11-26T01:07:40.307Z] Copying: 452/1024 [MB] (12 MBps) [2024-11-26T01:07:41.248Z] Copying: 472/1024 [MB] (19 MBps) [2024-11-26T01:07:42.193Z] Copying: 488/1024 [MB] (16 MBps) [2024-11-26T01:07:43.579Z] Copying: 505/1024 [MB] (17 MBps) [2024-11-26T01:07:44.216Z] Copying: 520/1024 [MB] (14 MBps) [2024-11-26T01:07:45.618Z] Copying: 531/1024 [MB] (10 MBps) [2024-11-26T01:07:46.558Z] Copying: 542/1024 [MB] (10 
MBps) [2024-11-26T01:07:47.502Z] Copying: 555/1024 [MB] (12 MBps) [2024-11-26T01:07:48.445Z] Copying: 568/1024 [MB] (13 MBps) [2024-11-26T01:07:49.400Z] Copying: 580/1024 [MB] (11 MBps) [2024-11-26T01:07:50.343Z] Copying: 597/1024 [MB] (16 MBps) [2024-11-26T01:07:51.284Z] Copying: 611/1024 [MB] (14 MBps) [2024-11-26T01:07:52.226Z] Copying: 625/1024 [MB] (14 MBps) [2024-11-26T01:07:53.612Z] Copying: 641/1024 [MB] (15 MBps) [2024-11-26T01:07:54.556Z] Copying: 653/1024 [MB] (12 MBps) [2024-11-26T01:07:55.493Z] Copying: 672/1024 [MB] (18 MBps) [2024-11-26T01:07:56.430Z] Copying: 688/1024 [MB] (15 MBps) [2024-11-26T01:07:57.369Z] Copying: 701/1024 [MB] (12 MBps) [2024-11-26T01:07:58.310Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-26T01:07:59.250Z] Copying: 722/1024 [MB] (10 MBps) [2024-11-26T01:08:00.626Z] Copying: 732/1024 [MB] (10 MBps) [2024-11-26T01:08:01.191Z] Copying: 743/1024 [MB] (10 MBps) [2024-11-26T01:08:02.562Z] Copying: 762/1024 [MB] (18 MBps) [2024-11-26T01:08:03.506Z] Copying: 787/1024 [MB] (25 MBps) [2024-11-26T01:08:04.450Z] Copying: 799/1024 [MB] (11 MBps) [2024-11-26T01:08:05.389Z] Copying: 810/1024 [MB] (11 MBps) [2024-11-26T01:08:06.334Z] Copying: 837/1024 [MB] (27 MBps) [2024-11-26T01:08:07.278Z] Copying: 853/1024 [MB] (15 MBps) [2024-11-26T01:08:08.223Z] Copying: 868/1024 [MB] (14 MBps) [2024-11-26T01:08:09.613Z] Copying: 880/1024 [MB] (12 MBps) [2024-11-26T01:08:10.557Z] Copying: 893/1024 [MB] (13 MBps) [2024-11-26T01:08:11.502Z] Copying: 910/1024 [MB] (16 MBps) [2024-11-26T01:08:12.446Z] Copying: 922/1024 [MB] (11 MBps) [2024-11-26T01:08:13.388Z] Copying: 936/1024 [MB] (13 MBps) [2024-11-26T01:08:14.330Z] Copying: 951/1024 [MB] (15 MBps) [2024-11-26T01:08:15.272Z] Copying: 963/1024 [MB] (12 MBps) [2024-11-26T01:08:16.216Z] Copying: 980/1024 [MB] (17 MBps) [2024-11-26T01:08:17.600Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-26T01:08:18.170Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-26T01:08:18.170Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-26 01:08:18.018936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.019005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:55.253 [2024-11-26 01:08:18.019021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:55.253 [2024-11-26 01:08:18.019032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.019057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.253 [2024-11-26 01:08:18.019549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.019576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:55.253 [2024-11-26 01:08:18.019588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:22:55.253 [2024-11-26 01:08:18.019597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.019876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.019891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:55.253 [2024-11-26 01:08:18.019906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:22:55.253 [2024-11-26 01:08:18.019921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.024636] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.024661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:55.253 [2024-11-26 01:08:18.024675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.693 ms 00:22:55.253 [2024-11-26 01:08:18.024685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.034375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.034400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:55.253 [2024-11-26 01:08:18.034413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.670 ms 00:22:55.253 [2024-11-26 01:08:18.034420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.035903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.035930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:55.253 [2024-11-26 01:08:18.035937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:22:55.253 [2024-11-26 01:08:18.035943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.039254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.039281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:55.253 [2024-11-26 01:08:18.039288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.297 ms 00:22:55.253 [2024-11-26 01:08:18.039293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.039377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.039391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:55.253 [2024-11-26 01:08:18.039400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:55.253 [2024-11-26 01:08:18.039408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.041102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.041136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:55.253 [2024-11-26 01:08:18.041142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:22:55.253 [2024-11-26 01:08:18.041148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.042511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.042538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:55.253 [2024-11-26 01:08:18.042545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:22:55.253 [2024-11-26 01:08:18.042550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.043713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.043740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:55.253 [2024-11-26 01:08:18.043747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:22:55.253 [2024-11-26 01:08:18.043753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:55.253 [2024-11-26 01:08:18.044681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.253 [2024-11-26 01:08:18.044708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:55.253 [2024-11-26 01:08:18.044715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:22:55.253 [2024-11-26 01:08:18.044721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.253 [2024-11-26 01:08:18.044734] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:55.253 [2024-11-26 01:08:18.044745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 
state: free 00:22:55.253 [2024-11-26 01:08:18.044886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:55.253 [2024-11-26 01:08:18.044903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.044999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 
0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:55.254 [2024-11-26 01:08:18.045314] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:55.255 [2024-11-26 01:08:18.045319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:55.255 [2024-11-26 01:08:18.045327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:55.255 [2024-11-26 01:08:18.045332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:55.255 [2024-11-26 01:08:18.045338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:55.255 [2024-11-26 01:08:18.045351] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:55.255 [2024-11-26 01:08:18.045357] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24fe6672-7f6f-41a2-a551-9dd1d146a529 00:22:55.255 [2024-11-26 01:08:18.045363] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:55.255 [2024-11-26 01:08:18.045369] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:55.255 [2024-11-26 01:08:18.045374] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:55.255 [2024-11-26 01:08:18.045380] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:55.255 [2024-11-26 01:08:18.045385] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:55.255 [2024-11-26 01:08:18.045398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:55.255 [2024-11-26 01:08:18.045404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:55.255 [2024-11-26 01:08:18.045409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:55.255 [2024-11-26 01:08:18.045414] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:55.255 [2024-11-26 01:08:18.045420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.255 [2024-11-26 01:08:18.045430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:55.255 [2024-11-26 01:08:18.045436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:22:55.255 [2024-11-26 01:08:18.045442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.046797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.255 [2024-11-26 01:08:18.046820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:55.255 [2024-11-26 01:08:18.046828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:22:55.255 [2024-11-26 01:08:18.046836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.046918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.255 [2024-11-26 01:08:18.046928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:55.255 [2024-11-26 01:08:18.046934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:55.255 [2024-11-26 01:08:18.046939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.051230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.051254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:55.255 [2024-11-26 01:08:18.051264] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.051273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.051316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.051323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:55.255 [2024-11-26 01:08:18.051330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.051336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.051363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.051370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:55.255 [2024-11-26 01:08:18.051376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.051383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.051394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.051400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:55.255 [2024-11-26 01:08:18.051406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.051411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.059142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.059169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:55.255 [2024-11-26 01:08:18.059177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.059187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.065377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:55.255 [2024-11-26 01:08:18.065385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.065391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.065434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.255 [2024-11-26 01:08:18.065441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.065446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.065475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.255 [2024-11-26 01:08:18.065480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.065487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.065552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize memory pools 00:22:55.255 [2024-11-26 01:08:18.065558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.065563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.065592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:55.255 [2024-11-26 01:08:18.065601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.065606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.065643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.255 [2024-11-26 01:08:18.065648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.065654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:55.255 [2024-11-26 01:08:18.065690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.255 [2024-11-26 01:08:18.065696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:55.255 [2024-11-26 01:08:18.065701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.255 [2024-11-26 01:08:18.065790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.845 ms, result 0 00:22:55.519 00:22:55.519 00:22:55.520 01:08:18 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:58.162 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:58.162 01:08:20 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:58.162 [2024-11-26 01:08:20.585005] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:22:58.162 [2024-11-26 01:08:20.585146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91781 ] 00:22:58.162 [2024-11-26 01:08:20.721987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:58.162 [2024-11-26 01:08:20.748121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.162 [2024-11-26 01:08:20.779095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.162 [2024-11-26 01:08:20.894364] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.162 [2024-11-26 01:08:20.894438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:58.162 [2024-11-26 01:08:21.056001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.056057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:58.162 [2024-11-26 01:08:21.056073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:58.162 [2024-11-26 01:08:21.056082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.056139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.056149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:58.162 [2024-11-26 01:08:21.056158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:58.162 [2024-11-26 01:08:21.056172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.056192] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:58.162 [2024-11-26 01:08:21.056450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:58.162 [2024-11-26 01:08:21.056466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.056478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:58.162 [2024-11-26 01:08:21.056487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:22:58.162 [2024-11-26 01:08:21.056495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.058230] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:58.162 [2024-11-26 01:08:21.062169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.062215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:58.162 [2024-11-26 01:08:21.062234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.941 ms 00:22:58.162 [2024-11-26 01:08:21.062245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.062325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.062336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:58.162 [2024-11-26 01:08:21.062350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:58.162 [2024-11-26 01:08:21.062362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.070577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.070621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:58.162 [2024-11-26 01:08:21.070632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.164 ms 00:22:58.162 [2024-11-26 01:08:21.070641] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.070741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.070750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:58.162 [2024-11-26 01:08:21.070762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:22:58.162 [2024-11-26 01:08:21.070770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.070830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.070859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:58.162 [2024-11-26 01:08:21.070868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:58.162 [2024-11-26 01:08:21.070879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.070910] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:58.162 [2024-11-26 01:08:21.072981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.073012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:58.162 [2024-11-26 01:08:21.073023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:22:58.162 [2024-11-26 01:08:21.073032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.073068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.162 [2024-11-26 01:08:21.073077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:58.162 [2024-11-26 01:08:21.073096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:58.162 [2024-11-26 01:08:21.073107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.162 [2024-11-26 01:08:21.073130] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:58.163 [2024-11-26 01:08:21.073151] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:58.163 [2024-11-26 01:08:21.073188] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:58.163 [2024-11-26 01:08:21.073207] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:58.163 [2024-11-26 01:08:21.073314] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:58.163 [2024-11-26 01:08:21.073332] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:58.163 [2024-11-26 01:08:21.073344] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:58.163 [2024-11-26 01:08:21.073354] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073363] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073371] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:58.163 [2024-11-26 01:08:21.073383] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:22:58.163 [2024-11-26 01:08:21.073394] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:58.163 [2024-11-26 01:08:21.073402] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:58.163 [2024-11-26 01:08:21.073410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.163 [2024-11-26 01:08:21.073418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:58.163 [2024-11-26 01:08:21.073428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:22:58.163 [2024-11-26 01:08:21.073437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.163 [2024-11-26 01:08:21.073520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.163 [2024-11-26 01:08:21.073528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:58.163 [2024-11-26 01:08:21.073536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:58.163 [2024-11-26 01:08:21.073543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.163 [2024-11-26 01:08:21.073643] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:58.163 [2024-11-26 01:08:21.073659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:58.163 [2024-11-26 01:08:21.073667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:58.163 [2024-11-26 01:08:21.073693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:58.163 [2024-11-26 01:08:21.073724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.163 [2024-11-26 01:08:21.073741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:58.163 [2024-11-26 01:08:21.073752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:58.163 [2024-11-26 01:08:21.073759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.163 [2024-11-26 01:08:21.073766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:58.163 [2024-11-26 01:08:21.073773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:58.163 [2024-11-26 01:08:21.073781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:58.163 [2024-11-26 01:08:21.073796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:58.163 [2024-11-26 01:08:21.073817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073824] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:58.163 [2024-11-26 01:08:21.073855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:58.163 [2024-11-26 01:08:21.073882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:58.163 [2024-11-26 01:08:21.073903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.163 [2024-11-26 01:08:21.073917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:58.163 [2024-11-26 01:08:21.073924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.163 [2024-11-26 01:08:21.073939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:58.163 [2024-11-26 01:08:21.073947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:58.163 [2024-11-26 01:08:21.073953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.163 [2024-11-26 01:08:21.073960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:58.163 [2024-11-26 01:08:21.073967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:58.163 [2024-11-26 01:08:21.073974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.163 [2024-11-26 01:08:21.073981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:58.163 [2024-11-26 01:08:21.073990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:58.163 [2024-11-26 01:08:21.073997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.163 [2024-11-26 01:08:21.074006] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:58.163 [2024-11-26 01:08:21.074018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:58.163 [2024-11-26 01:08:21.074027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.163 [2024-11-26 01:08:21.074061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.163 [2024-11-26 01:08:21.074070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:58.163 [2024-11-26 01:08:21.074078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:58.163 [2024-11-26 01:08:21.074086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:58.163 [2024-11-26 01:08:21.074093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:58.163 [2024-11-26 01:08:21.074100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:58.163 [2024-11-26 01:08:21.074107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:22:58.163 [2024-11-26 01:08:21.074115] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:58.163 [2024-11-26 01:08:21.074125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.163 [2024-11-26 01:08:21.074135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:58.163 [2024-11-26 01:08:21.074142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:58.163 [2024-11-26 01:08:21.074152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:58.163 [2024-11-26 01:08:21.074160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:58.163 [2024-11-26 01:08:21.074167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:58.163 [2024-11-26 01:08:21.074175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:58.163 [2024-11-26 01:08:21.074182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:58.163 [2024-11-26 01:08:21.074189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:58.163 [2024-11-26 01:08:21.074197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:58.163 [2024-11-26 01:08:21.074204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:58.163 [2024-11-26 01:08:21.074210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:58.163 [2024-11-26 01:08:21.074218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:58.163 [2024-11-26 01:08:21.074225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:58.163 [2024-11-26 01:08:21.074231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:58.163 [2024-11-26 01:08:21.074239] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:58.163 [2024-11-26 01:08:21.074248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.163 [2024-11-26 01:08:21.074261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:58.163 [2024-11-26 01:08:21.074269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:58.163 [2024-11-26 01:08:21.074279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:58.163 [2024-11-26 01:08:21.074287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:58.163 [2024-11-26 01:08:21.074296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.163 [2024-11-26 01:08:21.074304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:58.163 [2024-11-26 01:08:21.074315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:22:58.164 [2024-11-26 01:08:21.074327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.088676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.088719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.426 [2024-11-26 01:08:21.088732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.303 ms 00:22:58.426 [2024-11-26 01:08:21.088742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.088834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.088859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.426 [2024-11-26 01:08:21.088869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:58.426 [2024-11-26 01:08:21.088878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.110279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.110336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.426 [2024-11-26 01:08:21.110354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.328 ms 00:22:58.426 [2024-11-26 01:08:21.110366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.110427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.110443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.426 [2024-11-26 01:08:21.110466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:58.426 [2024-11-26 01:08:21.110482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.111116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.111160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.426 [2024-11-26 01:08:21.111191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:22:58.426 [2024-11-26 01:08:21.111202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.111402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.111416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.426 [2024-11-26 01:08:21.111429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:22:58.426 [2024-11-26 01:08:21.111440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.119897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 
01:08:21.119960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.426 [2024-11-26 01:08:21.119975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.423 ms 00:22:58.426 [2024-11-26 01:08:21.119995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.124091] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:58.426 [2024-11-26 01:08:21.124142] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:58.426 [2024-11-26 01:08:21.124158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.124166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:58.426 [2024-11-26 01:08:21.124176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.037 ms 00:22:58.426 [2024-11-26 01:08:21.124183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.140124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.140192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:58.426 [2024-11-26 01:08:21.140204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.884 ms 00:22:58.426 [2024-11-26 01:08:21.140223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.143241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.143290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:58.426 [2024-11-26 01:08:21.143301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:22:58.426 [2024-11-26 01:08:21.143309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.146070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.146117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:58.426 [2024-11-26 01:08:21.146127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:22:58.426 [2024-11-26 01:08:21.146145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.146490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.146511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.426 [2024-11-26 01:08:21.146521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:22:58.426 [2024-11-26 01:08:21.146534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.169290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.169515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:58.426 [2024-11-26 01:08:21.169538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.734 ms 00:22:58.426 [2024-11-26 01:08:21.169547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.177633] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:58.426 [2024-11-26 01:08:21.180523] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.180671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.426 [2024-11-26 01:08:21.180697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.933 ms 00:22:58.426 [2024-11-26 01:08:21.180706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.180783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.180794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:58.426 [2024-11-26 01:08:21.180804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:58.426 [2024-11-26 01:08:21.180812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.180902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.180918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.426 [2024-11-26 01:08:21.180927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:58.426 [2024-11-26 01:08:21.180941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.180965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.180974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.426 [2024-11-26 01:08:21.180983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:58.426 [2024-11-26 01:08:21.180993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.181033] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:58.426 [2024-11-26 01:08:21.181044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.181052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:58.426 [2024-11-26 01:08:21.181063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:58.426 [2024-11-26 01:08:21.181071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.185781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.185830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.426 [2024-11-26 01:08:21.185855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:22:58.426 [2024-11-26 01:08:21.185865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.185948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.426 [2024-11-26 01:08:21.185962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:58.426 [2024-11-26 01:08:21.185971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:58.426 [2024-11-26 01:08:21.185988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.426 [2024-11-26 01:08:21.187079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 130.612 ms, result 0 00:22:59.369  [2024-11-26T01:08:23.231Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-26T01:08:24.614Z] Copying: 31/1024 [MB] (13 MBps) 
[2024-11-26T01:08:25.560Z] Copying: 43/1024 [MB] (11 MBps) [2024-11-26T01:08:26.504Z] Copying: 60/1024 [MB] (17 MBps) [2024-11-26T01:08:27.449Z] Copying: 75/1024 [MB] (14 MBps) [2024-11-26T01:08:28.392Z] Copying: 95/1024 [MB] (20 MBps) [2024-11-26T01:08:29.336Z] Copying: 112/1024 [MB] (16 MBps) [2024-11-26T01:08:30.281Z] Copying: 127/1024 [MB] (14 MBps) [2024-11-26T01:08:31.226Z] Copying: 143/1024 [MB] (16 MBps) [2024-11-26T01:08:32.616Z] Copying: 160/1024 [MB] (16 MBps) [2024-11-26T01:08:33.561Z] Copying: 174/1024 [MB] (13 MBps) [2024-11-26T01:08:34.507Z] Copying: 184/1024 [MB] (10 MBps) [2024-11-26T01:08:35.453Z] Copying: 194/1024 [MB] (10 MBps) [2024-11-26T01:08:36.399Z] Copying: 204/1024 [MB] (10 MBps) [2024-11-26T01:08:37.344Z] Copying: 216/1024 [MB] (11 MBps) [2024-11-26T01:08:38.290Z] Copying: 226/1024 [MB] (10 MBps) [2024-11-26T01:08:39.238Z] Copying: 236/1024 [MB] (10 MBps) [2024-11-26T01:08:40.628Z] Copying: 247/1024 [MB] (10 MBps) [2024-11-26T01:08:41.202Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-26T01:08:42.591Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-26T01:08:43.533Z] Copying: 280/1024 [MB] (12 MBps) [2024-11-26T01:08:44.478Z] Copying: 296/1024 [MB] (15 MBps) [2024-11-26T01:08:45.424Z] Copying: 309/1024 [MB] (13 MBps) [2024-11-26T01:08:46.370Z] Copying: 319/1024 [MB] (10 MBps) [2024-11-26T01:08:47.316Z] Copying: 330/1024 [MB] (10 MBps) [2024-11-26T01:08:48.262Z] Copying: 340/1024 [MB] (10 MBps) [2024-11-26T01:08:49.207Z] Copying: 350/1024 [MB] (10 MBps) [2024-11-26T01:08:50.593Z] Copying: 360/1024 [MB] (10 MBps) [2024-11-26T01:08:51.534Z] Copying: 370/1024 [MB] (10 MBps) [2024-11-26T01:08:52.476Z] Copying: 380/1024 [MB] (10 MBps) [2024-11-26T01:08:53.422Z] Copying: 391/1024 [MB] (10 MBps) [2024-11-26T01:08:54.424Z] Copying: 401/1024 [MB] (10 MBps) [2024-11-26T01:08:55.368Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-26T01:08:56.309Z] Copying: 423/1024 [MB] (10 MBps) [2024-11-26T01:08:57.266Z] Copying: 443668/1048576 [kB] (10104 kBps) [2024-11-26T01:08:58.210Z] Copying: 453780/1048576 [kB] (10112 kBps) [2024-11-26T01:08:59.600Z] Copying: 464012/1048576 [kB] (10232 kBps) [2024-11-26T01:09:00.544Z] Copying: 463/1024 [MB] (10 MBps) [2024-11-26T01:09:01.490Z] Copying: 484544/1048576 [kB] (10220 kBps) [2024-11-26T01:09:02.434Z] Copying: 483/1024 [MB] (10 MBps) [2024-11-26T01:09:03.380Z] Copying: 493/1024 [MB] (10 MBps) [2024-11-26T01:09:04.325Z] Copying: 504/1024 [MB] (10 MBps) [2024-11-26T01:09:05.270Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-26T01:09:06.209Z] Copying: 525/1024 [MB] (10 MBps) [2024-11-26T01:09:07.593Z] Copying: 561/1024 [MB] (36 MBps) [2024-11-26T01:09:08.539Z] Copying: 586/1024 [MB] (24 MBps) [2024-11-26T01:09:09.482Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-26T01:09:10.423Z] Copying: 608/1024 [MB] (12 MBps) [2024-11-26T01:09:11.365Z] Copying: 625/1024 [MB] (17 MBps) [2024-11-26T01:09:12.307Z] Copying: 645/1024 [MB] (19 MBps) [2024-11-26T01:09:13.253Z] Copying: 664/1024 [MB] (19 MBps) [2024-11-26T01:09:14.203Z] Copying: 681/1024 [MB] (17 MBps) [2024-11-26T01:09:15.587Z] Copying: 700/1024 [MB] (18 MBps) [2024-11-26T01:09:16.533Z] Copying: 718/1024 [MB] (18 MBps) [2024-11-26T01:09:17.482Z] Copying: 734/1024 [MB] (16 MBps) [2024-11-26T01:09:18.428Z] Copying: 749/1024 [MB] (14 MBps) [2024-11-26T01:09:19.373Z] Copying: 765/1024 [MB] (16 MBps) [2024-11-26T01:09:20.313Z] Copying: 781/1024 [MB] (15 MBps) [2024-11-26T01:09:21.258Z] Copying: 801/1024 [MB] (20 MBps) [2024-11-26T01:09:22.205Z] Copying: 816/1024 [MB] (15 MBps) [2024-11-26T01:09:23.594Z] Copying: 
838/1024 [MB] (21 MBps) [2024-11-26T01:09:24.565Z] Copying: 857/1024 [MB] (18 MBps) [2024-11-26T01:09:25.543Z] Copying: 872/1024 [MB] (15 MBps) [2024-11-26T01:09:26.485Z] Copying: 889/1024 [MB] (16 MBps) [2024-11-26T01:09:27.420Z] Copying: 901/1024 [MB] (11 MBps) [2024-11-26T01:09:28.380Z] Copying: 932/1024 [MB] (31 MBps) [2024-11-26T01:09:29.321Z] Copying: 962/1024 [MB] (30 MBps) [2024-11-26T01:09:30.263Z] Copying: 979/1024 [MB] (17 MBps) [2024-11-26T01:09:31.642Z] Copying: 998/1024 [MB] (19 MBps) [2024-11-26T01:09:32.213Z] Copying: 1023/1024 [MB] (24 MBps) [2024-11-26T01:09:32.213Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-26 01:09:31.986031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:31.986122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:09.296 [2024-11-26 01:09:31.986140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:09.296 [2024-11-26 01:09:31.986157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:31.989534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:09.296 [2024-11-26 01:09:31.991108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:31.991152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:09.296 [2024-11-26 01:09:31.991174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.390 ms 00:24:09.296 [2024-11-26 01:09:31.991183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.005432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.005490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:09.296 [2024-11-26 01:09:32.005505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.016 ms 00:24:09.296 [2024-11-26 01:09:32.005513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.030095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.030153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:09.296 [2024-11-26 01:09:32.030166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.561 ms 00:24:09.296 [2024-11-26 01:09:32.030178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.036368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.036543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:09.296 [2024-11-26 01:09:32.036565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.151 ms 00:24:09.296 [2024-11-26 01:09:32.036574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.039324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.039371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:09.296 [2024-11-26 01:09:32.039382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:24:09.296 [2024-11-26 01:09:32.039390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.044030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.044077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:09.296 [2024-11-26 01:09:32.044088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.600 ms 00:24:09.296 [2024-11-26 01:09:32.044108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.183786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.183997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:09.296 [2024-11-26 01:09:32.184022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 139.634 ms 00:24:09.296 [2024-11-26 01:09:32.184033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.186426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.186487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:09.296 [2024-11-26 01:09:32.186498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.369 ms 00:24:09.296 [2024-11-26 01:09:32.186505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.188434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.188478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:09.296 [2024-11-26 01:09:32.188488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.890 ms 00:24:09.296 [2024-11-26 01:09:32.188495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.190128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.190171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:09.296 [2024-11-26 01:09:32.190181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.595 ms 00:24:09.296 [2024-11-26 01:09:32.190188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.191731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.296 [2024-11-26 01:09:32.191892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:09.296 [2024-11-26 01:09:32.191910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:24:09.296 [2024-11-26 01:09:32.191917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.296 [2024-11-26 01:09:32.191949] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:09.296 [2024-11-26 01:09:32.191965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107264 / 261120 wr_cnt: 1 state: open 00:24:09.296 [2024-11-26 01:09:32.191976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.191984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.191992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192204] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:09.296 [2024-11-26 01:09:32.192236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 
01:09:32.192399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 
00:24:09.297 [2024-11-26 01:09:32.192623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:09.297 [2024-11-26 01:09:32.192789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:09.297 [2024-11-26 01:09:32.192797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24fe6672-7f6f-41a2-a551-9dd1d146a529 00:24:09.297 [2024-11-26 01:09:32.192813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107264 00:24:09.297 [2024-11-26 01:09:32.192828] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108224 00:24:09.297 [2024-11-26 01:09:32.192853] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107264 00:24:09.297 [2024-11-26 01:09:32.192862] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:24:09.297 [2024-11-26 01:09:32.192870] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:09.297 [2024-11-26 01:09:32.192879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:09.297 [2024-11-26 01:09:32.192888] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:09.297 [2024-11-26 01:09:32.192896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:09.297 [2024-11-26 01:09:32.192903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:09.297 [2024-11-26 01:09:32.192913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.297 [2024-11-26 01:09:32.192926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:09.297 [2024-11-26 01:09:32.192936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.964 ms 00:24:09.297 [2024-11-26 01:09:32.192944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.297 [2024-11-26 01:09:32.195348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.297 [2024-11-26 01:09:32.195380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:09.297 [2024-11-26 01:09:32.195390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.387 ms 00:24:09.297 [2024-11-26 01:09:32.195399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.297 [2024-11-26 01:09:32.195516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:09.297 [2024-11-26 01:09:32.195525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:09.297 [2024-11-26 01:09:32.195535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:24:09.297 [2024-11-26 01:09:32.195545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.297 [2024-11-26 01:09:32.203424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.297 [2024-11-26 01:09:32.203479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:09.297 [2024-11-26 01:09:32.203490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.297 [2024-11-26 01:09:32.203497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.297 [2024-11-26 01:09:32.203557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.297 [2024-11-26 01:09:32.203565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:09.297 [2024-11-26 01:09:32.203574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.298 [2024-11-26 01:09:32.203586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.298 [2024-11-26 01:09:32.203650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.298 [2024-11-26 01:09:32.203661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:09.298 [2024-11-26 01:09:32.203669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.298 [2024-11-26 01:09:32.203676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.298 [2024-11-26 01:09:32.203691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.298 [2024-11-26 01:09:32.203704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:24:09.298 [2024-11-26 01:09:32.203713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.298 [2024-11-26 01:09:32.203721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.559 [2024-11-26 01:09:32.218507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.559 [2024-11-26 01:09:32.218701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:09.559 [2024-11-26 01:09:32.218726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.559 [2024-11-26 01:09:32.218740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.559 [2024-11-26 01:09:32.229820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.559 [2024-11-26 01:09:32.229883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:09.559 [2024-11-26 01:09:32.229896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.559 [2024-11-26 01:09:32.229905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.559 [2024-11-26 01:09:32.229970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.559 [2024-11-26 01:09:32.229980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:09.559 [2024-11-26 01:09:32.229989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.559 [2024-11-26 01:09:32.229997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.559 [2024-11-26 01:09:32.230067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.559 [2024-11-26 01:09:32.230078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:09.559 [2024-11-26 01:09:32.230087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.559 [2024-11-26 01:09:32.230095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.560 [2024-11-26 01:09:32.230171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.560 [2024-11-26 01:09:32.230182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:09.560 [2024-11-26 01:09:32.230191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.560 [2024-11-26 01:09:32.230199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.560 [2024-11-26 01:09:32.230235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.560 [2024-11-26 01:09:32.230249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:09.560 [2024-11-26 01:09:32.230258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.560 [2024-11-26 01:09:32.230267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.560 [2024-11-26 01:09:32.230308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.560 [2024-11-26 01:09:32.230321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:09.560 [2024-11-26 01:09:32.230330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.560 [2024-11-26 01:09:32.230338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.560 [2024-11-26 01:09:32.230388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:09.560 [2024-11-26 01:09:32.230399] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:09.560 [2024-11-26 01:09:32.230411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:09.560 [2024-11-26 01:09:32.230420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:09.560 [2024-11-26 01:09:32.230564] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 244.969 ms, result 0 00:24:10.498 00:24:10.498 00:24:10.498 01:09:33 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:10.498 [2024-11-26 01:09:33.210163] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:24:10.498 [2024-11-26 01:09:33.210275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92519 ] 00:24:10.498 [2024-11-26 01:09:33.343066] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:10.498 [2024-11-26 01:09:33.373022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.498 [2024-11-26 01:09:33.398721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.759 [2024-11-26 01:09:33.513005] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.759 [2024-11-26 01:09:33.513091] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.759 [2024-11-26 01:09:33.674995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.759 [2024-11-26 01:09:33.675204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:10.759 [2024-11-26 01:09:33.675229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:10.759 [2024-11-26 01:09:33.675238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.759 [2024-11-26 01:09:33.675315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.759 [2024-11-26 01:09:33.675327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.759 [2024-11-26 01:09:33.675336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:24:10.759 [2024-11-26 01:09:33.675347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.759 [2024-11-26 01:09:33.675370] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:10.759 [2024-11-26 01:09:33.675621] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:10.759 [2024-11-26 01:09:33.675637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.759 [2024-11-26 01:09:33.675649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.759 [2024-11-26 01:09:33.675659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:10.759 [2024-11-26 01:09:33.675669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.021 [2024-11-26 01:09:33.677411] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:11.021 [2024-11-26 01:09:33.681171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.021 [2024-11-26 01:09:33.681222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:11.022 [2024-11-26 01:09:33.681241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:24:11.022 [2024-11-26 01:09:33.681254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.681333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.681344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:11.022 [2024-11-26 01:09:33.681360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:11.022 [2024-11-26 01:09:33.681369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.689306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.689488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:11.022 [2024-11-26 01:09:33.689506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.892 ms 00:24:11.022 [2024-11-26 01:09:33.689516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.689622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.689633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:11.022 [2024-11-26 01:09:33.689641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:11.022 [2024-11-26 01:09:33.689649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.689707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.689720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:11.022 [2024-11-26 01:09:33.689730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:11.022 [2024-11-26 01:09:33.689740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.689764] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:11.022 [2024-11-26 01:09:33.691784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.691826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:11.022 [2024-11-26 01:09:33.691837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.026 ms 00:24:11.022 [2024-11-26 01:09:33.691881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.691925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.691933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:11.022 [2024-11-26 01:09:33.691945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:11.022 [2024-11-26 01:09:33.691953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.691975] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:11.022 [2024-11-26 01:09:33.691997] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:11.022 [2024-11-26 01:09:33.692033] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:11.022 [2024-11-26 01:09:33.692053] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:11.022 [2024-11-26 01:09:33.692164] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:11.022 [2024-11-26 01:09:33.692179] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:11.022 [2024-11-26 01:09:33.692193] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:11.022 [2024-11-26 01:09:33.692207] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692217] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692225] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:11.022 [2024-11-26 01:09:33.692233] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:11.022 [2024-11-26 01:09:33.692240] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:11.022 [2024-11-26 01:09:33.692249] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:11.022 [2024-11-26 01:09:33.692257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.692264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:11.022 [2024-11-26 01:09:33.692272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:24:11.022 [2024-11-26 01:09:33.692282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.692366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.022 [2024-11-26 01:09:33.692380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:11.022 [2024-11-26 01:09:33.692388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:11.022 [2024-11-26 01:09:33.692396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.022 [2024-11-26 01:09:33.692494] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:11.022 [2024-11-26 01:09:33.692505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:11.022 [2024-11-26 01:09:33.692514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:11.022 [2024-11-26 01:09:33.692544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:11.022 [2024-11-26 01:09:33.692574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:11.022 [2024-11-26 
01:09:33.692583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.022 [2024-11-26 01:09:33.692591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:11.022 [2024-11-26 01:09:33.692598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:11.022 [2024-11-26 01:09:33.692606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:11.022 [2024-11-26 01:09:33.692616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:11.022 [2024-11-26 01:09:33.692625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:11.022 [2024-11-26 01:09:33.692632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:11.022 [2024-11-26 01:09:33.692649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:11.022 [2024-11-26 01:09:33.692674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:11.022 [2024-11-26 01:09:33.692697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:11.022 [2024-11-26 01:09:33.692722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:11.022 [2024-11-26 01:09:33.692750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:11.022 [2024-11-26 01:09:33.692775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.022 [2024-11-26 01:09:33.692791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:11.022 [2024-11-26 01:09:33.692798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:11.022 [2024-11-26 01:09:33.692806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:11.022 [2024-11-26 01:09:33.692813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:11.022 [2024-11-26 01:09:33.692821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:11.022 [2024-11-26 01:09:33.692828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:24:11.022 [2024-11-26 01:09:33.692860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:11.022 [2024-11-26 01:09:33.692867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692873] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:11.022 [2024-11-26 01:09:33.692885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:11.022 [2024-11-26 01:09:33.692897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:11.022 [2024-11-26 01:09:33.692917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:11.022 [2024-11-26 01:09:33.692925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:11.022 [2024-11-26 01:09:33.692934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:11.022 [2024-11-26 01:09:33.692942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:11.022 [2024-11-26 01:09:33.692949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:11.022 [2024-11-26 01:09:33.692956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:11.022 [2024-11-26 01:09:33.692965] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:11.022 [2024-11-26 01:09:33.692978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.022 [2024-11-26 01:09:33.692986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:11.023 [2024-11-26 01:09:33.692994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:11.023 [2024-11-26 01:09:33.693001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:11.023 [2024-11-26 01:09:33.693008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:11.023 [2024-11-26 01:09:33.693015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:11.023 [2024-11-26 01:09:33.693022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:11.023 [2024-11-26 01:09:33.693032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:11.023 [2024-11-26 01:09:33.693039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:11.023 [2024-11-26 01:09:33.693046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:11.023 [2024-11-26 01:09:33.693053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:11.023 [2024-11-26 01:09:33.693060] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:11.023 [2024-11-26 01:09:33.693068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:11.023 [2024-11-26 01:09:33.693075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:11.023 [2024-11-26 01:09:33.693083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:11.023 [2024-11-26 01:09:33.693089] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:11.023 [2024-11-26 01:09:33.693098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:11.023 [2024-11-26 01:09:33.693106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:11.023 [2024-11-26 01:09:33.693113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:11.023 [2024-11-26 01:09:33.693121] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:11.023 [2024-11-26 01:09:33.693128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:11.023 [2024-11-26 01:09:33.693136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.693144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:11.023 [2024-11-26 01:09:33.693154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:24:11.023 [2024-11-26 01:09:33.693165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.706933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.706978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:11.023 [2024-11-26 01:09:33.706991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.723 ms 00:24:11.023 [2024-11-26 01:09:33.707007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.707098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.707108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:11.023 [2024-11-26 01:09:33.707118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:11.023 [2024-11-26 01:09:33.707127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.732592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.732907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:11.023 [2024-11-26 01:09:33.732946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.400 ms 00:24:11.023 [2024-11-26 01:09:33.732966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.733068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 
01:09:33.733090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:11.023 [2024-11-26 01:09:33.733109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:11.023 [2024-11-26 01:09:33.733133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.733802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.733907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:11.023 [2024-11-26 01:09:33.733948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:24:11.023 [2024-11-26 01:09:33.733965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.734272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.734311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:11.023 [2024-11-26 01:09:33.734338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:11.023 [2024-11-26 01:09:33.734356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.743454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.743501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:11.023 [2024-11-26 01:09:33.743511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.052 ms 00:24:11.023 [2024-11-26 01:09:33.743529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.747514] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:11.023 [2024-11-26 01:09:33.747564] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:11.023 [2024-11-26 01:09:33.747586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.747596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:11.023 [2024-11-26 01:09:33.747605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.943 ms 00:24:11.023 [2024-11-26 01:09:33.747613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.771729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.771942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:11.023 [2024-11-26 01:09:33.771967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.064 ms 00:24:11.023 [2024-11-26 01:09:33.771985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.775145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.775318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:11.023 [2024-11-26 01:09:33.775336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.982 ms 00:24:11.023 [2024-11-26 01:09:33.775344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.778182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.778231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:24:11.023 [2024-11-26 01:09:33.778242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:24:11.023 [2024-11-26 01:09:33.778249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.778594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.778609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:11.023 [2024-11-26 01:09:33.778622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:24:11.023 [2024-11-26 01:09:33.778632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.802760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.802988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:11.023 [2024-11-26 01:09:33.803059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.110 ms 00:24:11.023 [2024-11-26 01:09:33.803083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.811171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:11.023 [2024-11-26 01:09:33.814336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.814471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:11.023 [2024-11-26 01:09:33.814525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.201 ms 00:24:11.023 [2024-11-26 01:09:33.814548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.814645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.814675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:11.023 [2024-11-26 01:09:33.814697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:11.023 [2024-11-26 01:09:33.814715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.816480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.816632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:11.023 [2024-11-26 01:09:33.816649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.713 ms 00:24:11.023 [2024-11-26 01:09:33.816658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.816695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.816710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:11.023 [2024-11-26 01:09:33.816720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:11.023 [2024-11-26 01:09:33.816728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.816765] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:11.023 [2024-11-26 01:09:33.816776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.816784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:11.023 [2024-11-26 01:09:33.816795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.012 ms 00:24:11.023 [2024-11-26 01:09:33.816804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.023 [2024-11-26 01:09:33.822230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.023 [2024-11-26 01:09:33.822279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:11.023 [2024-11-26 01:09:33.822291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.402 ms 00:24:11.023 [2024-11-26 01:09:33.822311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.024 [2024-11-26 01:09:33.822394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:11.024 [2024-11-26 01:09:33.822404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:11.024 [2024-11-26 01:09:33.822414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:11.024 [2024-11-26 01:09:33.822426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:11.024 [2024-11-26 01:09:33.823573] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 148.139 ms, result 0 00:24:12.405  [2024-11-26T01:09:36.268Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-26T01:09:37.213Z] Copying: 31/1024 [MB] (19 MBps) [2024-11-26T01:09:38.157Z] Copying: 49/1024 [MB] (17 MBps) [2024-11-26T01:09:39.098Z] Copying: 69/1024 [MB] (20 MBps) [2024-11-26T01:09:40.040Z] Copying: 90/1024 [MB] (20 MBps) [2024-11-26T01:09:41.427Z] Copying: 109/1024 [MB] (18 MBps) [2024-11-26T01:09:42.373Z] Copying: 121/1024 [MB] (11 MBps) [2024-11-26T01:09:43.320Z] Copying: 131/1024 [MB] (10 MBps) [2024-11-26T01:09:44.267Z] Copying: 141/1024 [MB] (10 MBps) [2024-11-26T01:09:45.212Z] Copying: 152/1024 [MB] (10 MBps) [2024-11-26T01:09:46.158Z] Copying: 164/1024 [MB] (11 MBps) [2024-11-26T01:09:47.103Z] Copying: 175/1024 [MB] (10 MBps) [2024-11-26T01:09:48.049Z] Copying: 185/1024 [MB] (10 MBps) [2024-11-26T01:09:49.433Z] Copying: 196/1024 [MB] (10 MBps) [2024-11-26T01:09:50.375Z] Copying: 207/1024 [MB] (10 MBps) [2024-11-26T01:09:51.317Z] Copying: 218/1024 [MB] (10 MBps) [2024-11-26T01:09:52.263Z] Copying: 237/1024 [MB] (18 MBps) [2024-11-26T01:09:53.208Z] Copying: 250/1024 [MB] (13 MBps) [2024-11-26T01:09:54.150Z] Copying: 262/1024 [MB] (12 MBps) [2024-11-26T01:09:55.095Z] Copying: 278/1024 [MB] (15 MBps) [2024-11-26T01:09:56.042Z] Copying: 295/1024 [MB] (16 MBps) [2024-11-26T01:09:57.067Z] Copying: 309/1024 [MB] (14 MBps) [2024-11-26T01:09:58.012Z] Copying: 329/1024 [MB] (20 MBps) [2024-11-26T01:09:59.398Z] Copying: 341/1024 [MB] (11 MBps) [2024-11-26T01:10:00.341Z] Copying: 355/1024 [MB] (13 MBps) [2024-11-26T01:10:01.284Z] Copying: 371/1024 [MB] (16 MBps) [2024-11-26T01:10:02.226Z] Copying: 385/1024 [MB] (13 MBps) [2024-11-26T01:10:03.169Z] Copying: 406/1024 [MB] (20 MBps) [2024-11-26T01:10:04.112Z] Copying: 427/1024 [MB] (21 MBps) [2024-11-26T01:10:05.085Z] Copying: 451/1024 [MB] (23 MBps) [2024-11-26T01:10:06.030Z] Copying: 464/1024 [MB] (13 MBps) [2024-11-26T01:10:07.420Z] Copying: 475/1024 [MB] (10 MBps) [2024-11-26T01:10:08.387Z] Copying: 486/1024 [MB] (10 MBps) [2024-11-26T01:10:09.332Z] Copying: 496/1024 [MB] (10 MBps) [2024-11-26T01:10:10.279Z] Copying: 507/1024 [MB] (10 MBps) [2024-11-26T01:10:11.227Z] Copying: 517/1024 [MB] (10 MBps) [2024-11-26T01:10:12.173Z] Copying: 528/1024 [MB] (11 MBps) [2024-11-26T01:10:13.117Z] Copying: 539/1024 [MB] (10 MBps) [2024-11-26T01:10:14.063Z] Copying: 550/1024 [MB] (10 
MBps) [2024-11-26T01:10:15.453Z] Copying: 560/1024 [MB] (10 MBps) [2024-11-26T01:10:16.025Z] Copying: 571/1024 [MB] (10 MBps) [2024-11-26T01:10:17.414Z] Copying: 582/1024 [MB] (11 MBps) [2024-11-26T01:10:18.360Z] Copying: 593/1024 [MB] (10 MBps) [2024-11-26T01:10:19.307Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-26T01:10:20.244Z] Copying: 614/1024 [MB] (10 MBps) [2024-11-26T01:10:21.188Z] Copying: 629/1024 [MB] (14 MBps) [2024-11-26T01:10:22.133Z] Copying: 645/1024 [MB] (15 MBps) [2024-11-26T01:10:23.073Z] Copying: 655/1024 [MB] (10 MBps) [2024-11-26T01:10:24.016Z] Copying: 667/1024 [MB] (12 MBps) [2024-11-26T01:10:25.399Z] Copying: 678/1024 [MB] (11 MBps) [2024-11-26T01:10:26.341Z] Copying: 691/1024 [MB] (12 MBps) [2024-11-26T01:10:27.283Z] Copying: 704/1024 [MB] (13 MBps) [2024-11-26T01:10:28.233Z] Copying: 719/1024 [MB] (14 MBps) [2024-11-26T01:10:29.313Z] Copying: 731/1024 [MB] (11 MBps) [2024-11-26T01:10:30.254Z] Copying: 741/1024 [MB] (10 MBps) [2024-11-26T01:10:31.196Z] Copying: 754/1024 [MB] (12 MBps) [2024-11-26T01:10:32.142Z] Copying: 770/1024 [MB] (15 MBps) [2024-11-26T01:10:33.089Z] Copying: 780/1024 [MB] (10 MBps) [2024-11-26T01:10:34.031Z] Copying: 790/1024 [MB] (10 MBps) [2024-11-26T01:10:35.416Z] Copying: 807/1024 [MB] (17 MBps) [2024-11-26T01:10:36.363Z] Copying: 823/1024 [MB] (15 MBps) [2024-11-26T01:10:37.307Z] Copying: 835/1024 [MB] (12 MBps) [2024-11-26T01:10:38.251Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-26T01:10:39.196Z] Copying: 858/1024 [MB] (12 MBps) [2024-11-26T01:10:40.138Z] Copying: 873/1024 [MB] (14 MBps) [2024-11-26T01:10:41.079Z] Copying: 883/1024 [MB] (10 MBps) [2024-11-26T01:10:42.019Z] Copying: 901/1024 [MB] (17 MBps) [2024-11-26T01:10:43.403Z] Copying: 912/1024 [MB] (11 MBps) [2024-11-26T01:10:44.346Z] Copying: 924/1024 [MB] (11 MBps) [2024-11-26T01:10:45.288Z] Copying: 947/1024 [MB] (22 MBps) [2024-11-26T01:10:46.233Z] Copying: 962/1024 [MB] (15 MBps) [2024-11-26T01:10:47.180Z] Copying: 983/1024 [MB] (20 MBps) [2024-11-26T01:10:48.125Z] Copying: 997/1024 [MB] (14 MBps) [2024-11-26T01:10:48.698Z] Copying: 1011/1024 [MB] (14 MBps) [2024-11-26T01:10:48.960Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-26 01:10:48.779526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.043 [2024-11-26 01:10:48.779630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:26.044 [2024-11-26 01:10:48.779652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:26.044 [2024-11-26 01:10:48.779665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.779702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:26.044 [2024-11-26 01:10:48.780531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.780563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:26.044 [2024-11-26 01:10:48.780589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:25:26.044 [2024-11-26 01:10:48.780611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.781351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.781539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:26.044 [2024-11-26 01:10:48.781695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 
ms 00:25:26.044 [2024-11-26 01:10:48.781756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.790470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.790651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:26.044 [2024-11-26 01:10:48.790736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.519 ms 00:25:26.044 [2024-11-26 01:10:48.790831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.795770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.795952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:26.044 [2024-11-26 01:10:48.796046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.857 ms 00:25:26.044 [2024-11-26 01:10:48.796088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.797824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.797995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:26.044 [2024-11-26 01:10:48.798017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:25:26.044 [2024-11-26 01:10:48.798026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.802410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.802463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:26.044 [2024-11-26 01:10:48.802473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.336 ms 00:25:26.044 [2024-11-26 01:10:48.802487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.869246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.869365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:26.044 [2024-11-26 01:10:48.869385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.716 ms 00:25:26.044 [2024-11-26 01:10:48.869394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.871207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.871241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:26.044 [2024-11-26 01:10:48.871259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:25:26.044 [2024-11-26 01:10:48.871265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.872412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.872524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:26.044 [2024-11-26 01:10:48.872540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.119 ms 00:25:26.044 [2024-11-26 01:10:48.872550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.874474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.874577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:26.044 [2024-11-26 01:10:48.874610] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:25:26.044 [2024-11-26 01:10:48.874632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.876498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.044 [2024-11-26 01:10:48.876575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:26.044 [2024-11-26 01:10:48.876600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:25:26.044 [2024-11-26 01:10:48.876618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.044 [2024-11-26 01:10:48.876681] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:26.044 [2024-11-26 01:10:48.876715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:26.044 [2024-11-26 01:10:48.876745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.876984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877660] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:26.044 [2024-11-26 01:10:48.877762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.877984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 
01:10:48.878207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:25:26.045 [2024-11-26 01:10:48.878716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:26.045 [2024-11-26 01:10:48.878881] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:26.045 [2024-11-26 01:10:48.878902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 24fe6672-7f6f-41a2-a551-9dd1d146a529 00:25:26.045 [2024-11-26 01:10:48.878924] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:26.045 [2024-11-26 01:10:48.878960] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 24768 00:25:26.045 [2024-11-26 01:10:48.878991] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 23808 00:25:26.045 [2024-11-26 01:10:48.879012] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0403 00:25:26.045 [2024-11-26 01:10:48.879039] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:26.045 [2024-11-26 01:10:48.879060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:26.045 [2024-11-26 01:10:48.879079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:26.045 [2024-11-26 01:10:48.879097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:26.045 [2024-11-26 01:10:48.879114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:26.045 [2024-11-26 01:10:48.879133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.045 [2024-11-26 01:10:48.879154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:26.045 [2024-11-26 01:10:48.879186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:25:26.045 [2024-11-26 01:10:48.879206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.045 [2024-11-26 01:10:48.882200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.045 [2024-11-26 01:10:48.882429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:26.045 [2024-11-26 01:10:48.882658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:25:26.045 [2024-11-26 01:10:48.882749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.045 [2024-11-26 01:10:48.882993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:26.045 [2024-11-26 01:10:48.883180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:26.045 [2024-11-26 01:10:48.883246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:25:26.045 [2024-11-26 01:10:48.883314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:26.045 [2024-11-26 01:10:48.889769] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.045 [2024-11-26 01:10:48.889917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:26.045 [2024-11-26 01:10:48.889968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.045 [2024-11-26 01:10:48.889990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.045 [2024-11-26 01:10:48.890073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.045 [2024-11-26 01:10:48.890097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:26.045 [2024-11-26 01:10:48.890116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.045 [2024-11-26 01:10:48.890142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.045 [2024-11-26 01:10:48.890191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.045 [2024-11-26 01:10:48.890333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:26.045 [2024-11-26 01:10:48.890363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.045 [2024-11-26 01:10:48.890382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.045 [2024-11-26 01:10:48.890409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.045 [2024-11-26 01:10:48.890430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:26.045 [2024-11-26 01:10:48.890511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.045 [2024-11-26 01:10:48.890533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.045 [2024-11-26 01:10:48.899636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.045 [2024-11-26 01:10:48.899764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:26.045 [2024-11-26 01:10:48.899816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.045 [2024-11-26 01:10:48.899851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.045 [2024-11-26 01:10:48.907532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.045 [2024-11-26 01:10:48.907667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:26.045 [2024-11-26 01:10:48.907716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.045 [2024-11-26 01:10:48.907738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.045 [2024-11-26 01:10:48.907815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.045 [2024-11-26 01:10:48.907854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:26.046 [2024-11-26 01:10:48.907930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.046 [2024-11-26 01:10:48.907984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.046 [2024-11-26 01:10:48.908027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.046 [2024-11-26 01:10:48.908066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:26.046 [2024-11-26 01:10:48.908089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.046 [2024-11-26 01:10:48.908132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.046 [2024-11-26 01:10:48.908223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.046 [2024-11-26 01:10:48.908248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:26.046 [2024-11-26 01:10:48.908303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.046 [2024-11-26 01:10:48.908325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.046 [2024-11-26 01:10:48.908370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.046 [2024-11-26 01:10:48.908392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:26.046 [2024-11-26 01:10:48.908434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.046 [2024-11-26 01:10:48.908454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.046 [2024-11-26 01:10:48.908504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.046 [2024-11-26 01:10:48.908532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:26.046 [2024-11-26 01:10:48.908551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.046 [2024-11-26 01:10:48.908596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.046 [2024-11-26 01:10:48.908654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:26.046 [2024-11-26 01:10:48.908677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:26.046 [2024-11-26 01:10:48.908697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:26.046 [2024-11-26 01:10:48.908796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:26.046 [2024-11-26 01:10:48.908943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 129.406 ms, result 0
00:25:26.307 
00:25:26.307 
00:25:26.307 01:10:49 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:28.853 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:28.854 Process with pid 90047 is not found
00:25:28.854 Remove shared memory files
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90047
00:25:28.854 01:10:51 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 90047 ']'
00:25:28.854 01:10:51 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 90047
00:25:28.854 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90047) - No such process
00:25:28.854 01:10:51 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 90047 is not found'
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:25:28.854 01:10:51 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:25:28.854 ************************************
00:25:28.854 END TEST ftl_restore
00:25:28.854 ************************************
00:25:28.854 
00:25:28.854 real 5m17.676s
00:25:28.854 user 5m3.840s
00:25:28.854 sys 0m13.236s
00:25:28.854 01:10:51 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:28.854 01:10:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
00:25:28.854 01:10:51 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:25:28.854 01:10:51 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:25:28.854 01:10:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:28.854 01:10:51 ftl -- common/autotest_common.sh@10 -- # set +x
00:25:28.854 ************************************
00:25:28.854 START TEST ftl_dirty_shutdown
00:25:28.854 ************************************
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:25:28.854 * Looking for test storage...
00:25:28.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.854 --rc genhtml_branch_coverage=1 00:25:28.854 --rc genhtml_function_coverage=1 00:25:28.854 --rc genhtml_legend=1 00:25:28.854 --rc geninfo_all_blocks=1 00:25:28.854 --rc geninfo_unexecuted_blocks=1 00:25:28.854 00:25:28.854 ' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.854 --rc genhtml_branch_coverage=1 00:25:28.854 --rc genhtml_function_coverage=1 00:25:28.854 --rc genhtml_legend=1 00:25:28.854 --rc geninfo_all_blocks=1 00:25:28.854 --rc geninfo_unexecuted_blocks=1 00:25:28.854 00:25:28.854 ' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.854 --rc genhtml_branch_coverage=1 00:25:28.854 --rc genhtml_function_coverage=1 00:25:28.854 --rc genhtml_legend=1 00:25:28.854 --rc geninfo_all_blocks=1 00:25:28.854 --rc geninfo_unexecuted_blocks=1 00:25:28.854 00:25:28.854 ' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:28.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.854 --rc genhtml_branch_coverage=1 00:25:28.854 --rc genhtml_function_coverage=1 00:25:28.854 --rc genhtml_legend=1 00:25:28.854 --rc geninfo_all_blocks=1 00:25:28.854 --rc geninfo_unexecuted_blocks=1 00:25:28.854 00:25:28.854 ' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:28.854 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:28.855 01:10:51 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=93383 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 93383 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93383 ']' 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.855 01:10:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:28.855 [2024-11-26 01:10:51.598815] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:25:28.855 [2024-11-26 01:10:51.599217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93383 ] 00:25:28.855 [2024-11-26 01:10:51.736004] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
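[Annotation: the xtrace above shows dirty_shutdown.sh parsing its options and launching the SPDK target before driving the test. A minimal bash sketch of that launch-and-wait pattern follows; the rpc.py polling loop is an assumption standing in for waitforlisten's internals, not a copy of it.]

    # Sketch of the spdk_tgt launch/wait step above (assumed simplification).
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $spdk_tgt -m 0x1 &        # core mask 0x1: a single reactor on core 0
    svcpid=$!
    # Poll until the target answers on the default /var/tmp/spdk.sock socket.
    until $rpc rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done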
00:25:28.855 [2024-11-26 01:10:51.766134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.116 [2024-11-26 01:10:51.795428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:29.690 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:29.951 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:30.212 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:30.212 { 00:25:30.212 "name": "nvme0n1", 00:25:30.212 "aliases": [ 00:25:30.212 "5c9c07f7-c3e0-4265-a632-5668a4a729a5" 00:25:30.212 ], 00:25:30.212 "product_name": "NVMe disk", 00:25:30.212 "block_size": 4096, 00:25:30.212 "num_blocks": 1310720, 00:25:30.212 "uuid": "5c9c07f7-c3e0-4265-a632-5668a4a729a5", 00:25:30.212 "numa_id": -1, 00:25:30.212 "assigned_rate_limits": { 00:25:30.212 "rw_ios_per_sec": 0, 00:25:30.212 "rw_mbytes_per_sec": 0, 00:25:30.212 "r_mbytes_per_sec": 0, 00:25:30.212 "w_mbytes_per_sec": 0 00:25:30.212 }, 00:25:30.212 "claimed": true, 00:25:30.212 "claim_type": "read_many_write_one", 00:25:30.212 "zoned": false, 00:25:30.212 "supported_io_types": { 00:25:30.212 "read": true, 00:25:30.212 "write": true, 00:25:30.212 "unmap": true, 00:25:30.212 "flush": true, 00:25:30.212 "reset": true, 00:25:30.212 "nvme_admin": true, 00:25:30.212 "nvme_io": true, 00:25:30.212 "nvme_io_md": false, 00:25:30.212 "write_zeroes": true, 00:25:30.212 "zcopy": false, 00:25:30.212 "get_zone_info": false, 00:25:30.212 "zone_management": false, 00:25:30.212 "zone_append": false, 00:25:30.212 "compare": true, 00:25:30.212 "compare_and_write": false, 00:25:30.212 "abort": true, 00:25:30.212 "seek_hole": false, 00:25:30.212 "seek_data": false, 00:25:30.212 "copy": true, 00:25:30.212 "nvme_iov_md": false 00:25:30.212 }, 00:25:30.212 "driver_specific": { 00:25:30.212 "nvme": [ 00:25:30.212 { 00:25:30.212 "pci_address": "0000:00:11.0", 00:25:30.213 "trid": { 00:25:30.213 "trtype": "PCIe", 00:25:30.213 "traddr": "0000:00:11.0" 00:25:30.213 }, 00:25:30.213 "ctrlr_data": { 
00:25:30.213 "cntlid": 0, 00:25:30.213 "vendor_id": "0x1b36", 00:25:30.213 "model_number": "QEMU NVMe Ctrl", 00:25:30.213 "serial_number": "12341", 00:25:30.213 "firmware_revision": "8.0.0", 00:25:30.213 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:30.213 "oacs": { 00:25:30.213 "security": 0, 00:25:30.213 "format": 1, 00:25:30.213 "firmware": 0, 00:25:30.213 "ns_manage": 1 00:25:30.213 }, 00:25:30.213 "multi_ctrlr": false, 00:25:30.213 "ana_reporting": false 00:25:30.213 }, 00:25:30.213 "vs": { 00:25:30.213 "nvme_version": "1.4" 00:25:30.213 }, 00:25:30.213 "ns_data": { 00:25:30.213 "id": 1, 00:25:30.213 "can_share": false 00:25:30.213 } 00:25:30.213 } 00:25:30.213 ], 00:25:30.213 "mp_policy": "active_passive" 00:25:30.213 } 00:25:30.213 } 00:25:30.213 ]' 00:25:30.213 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:30.213 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:30.213 01:10:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:30.213 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:30.472 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c28bb4a5-06d2-4976-b681-468bf012e66a 00:25:30.472 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:30.473 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c28bb4a5-06d2-4976-b681-468bf012e66a 00:25:30.731 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5d81099a-0295-4bf0-a16e-66d67035acdd 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5d81099a-0295-4bf0-a16e-66d67035acdd 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:30.990 01:10:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:31.249 { 00:25:31.249 "name": "1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e", 00:25:31.249 "aliases": [ 00:25:31.249 "lvs/nvme0n1p0" 00:25:31.249 ], 00:25:31.249 "product_name": "Logical Volume", 00:25:31.249 "block_size": 4096, 00:25:31.249 "num_blocks": 26476544, 00:25:31.249 "uuid": "1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e", 00:25:31.249 "assigned_rate_limits": { 00:25:31.249 "rw_ios_per_sec": 0, 00:25:31.249 "rw_mbytes_per_sec": 0, 00:25:31.249 "r_mbytes_per_sec": 0, 00:25:31.249 "w_mbytes_per_sec": 0 00:25:31.249 }, 00:25:31.249 "claimed": false, 00:25:31.249 "zoned": false, 00:25:31.249 "supported_io_types": { 00:25:31.249 "read": true, 00:25:31.249 "write": true, 00:25:31.249 "unmap": true, 00:25:31.249 "flush": false, 00:25:31.249 "reset": true, 00:25:31.249 "nvme_admin": false, 00:25:31.249 "nvme_io": false, 00:25:31.249 "nvme_io_md": false, 00:25:31.249 "write_zeroes": true, 00:25:31.249 "zcopy": false, 00:25:31.249 "get_zone_info": false, 00:25:31.249 "zone_management": false, 00:25:31.249 "zone_append": false, 00:25:31.249 "compare": false, 00:25:31.249 "compare_and_write": false, 00:25:31.249 "abort": false, 00:25:31.249 "seek_hole": true, 00:25:31.249 "seek_data": true, 00:25:31.249 "copy": false, 00:25:31.249 "nvme_iov_md": false 00:25:31.249 }, 00:25:31.249 "driver_specific": { 00:25:31.249 "lvol": { 00:25:31.249 "lvol_store_uuid": "5d81099a-0295-4bf0-a16e-66d67035acdd", 00:25:31.249 "base_bdev": "nvme0n1", 00:25:31.249 "thin_provision": true, 00:25:31.249 "num_allocated_clusters": 0, 00:25:31.249 "snapshot": false, 00:25:31.249 "clone": false, 00:25:31.249 "esnap_clone": false 00:25:31.249 } 00:25:31.249 } 00:25:31.249 } 00:25:31.249 ]' 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:31.249 01:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:31.508 01:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:31.508 01:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:31.508 01:10:54 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:31.508 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:31.508 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:31.508 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:31.508 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:31.508 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:31.766 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:31.766 { 00:25:31.766 "name": "1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e", 00:25:31.766 "aliases": [ 00:25:31.766 "lvs/nvme0n1p0" 00:25:31.766 ], 00:25:31.766 "product_name": "Logical Volume", 00:25:31.766 "block_size": 4096, 00:25:31.766 "num_blocks": 26476544, 00:25:31.766 "uuid": "1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e", 00:25:31.766 "assigned_rate_limits": { 00:25:31.766 "rw_ios_per_sec": 0, 00:25:31.766 "rw_mbytes_per_sec": 0, 00:25:31.766 "r_mbytes_per_sec": 0, 00:25:31.766 "w_mbytes_per_sec": 0 00:25:31.766 }, 00:25:31.766 "claimed": false, 00:25:31.766 "zoned": false, 00:25:31.766 "supported_io_types": { 00:25:31.766 "read": true, 00:25:31.766 "write": true, 00:25:31.766 "unmap": true, 00:25:31.766 "flush": false, 00:25:31.766 "reset": true, 00:25:31.766 "nvme_admin": false, 00:25:31.766 "nvme_io": false, 00:25:31.766 "nvme_io_md": false, 00:25:31.766 "write_zeroes": true, 00:25:31.766 "zcopy": false, 00:25:31.766 "get_zone_info": false, 00:25:31.766 "zone_management": false, 00:25:31.766 "zone_append": false, 00:25:31.766 "compare": false, 00:25:31.766 "compare_and_write": false, 00:25:31.766 "abort": false, 00:25:31.766 "seek_hole": true, 00:25:31.766 "seek_data": true, 00:25:31.766 "copy": false, 00:25:31.766 "nvme_iov_md": false 00:25:31.766 }, 00:25:31.766 "driver_specific": { 00:25:31.766 "lvol": { 00:25:31.766 "lvol_store_uuid": "5d81099a-0295-4bf0-a16e-66d67035acdd", 00:25:31.767 "base_bdev": "nvme0n1", 00:25:31.767 "thin_provision": true, 00:25:31.767 "num_allocated_clusters": 0, 00:25:31.767 "snapshot": false, 00:25:31.767 "clone": false, 00:25:31.767 "esnap_clone": false 00:25:31.767 } 00:25:31.767 } 00:25:31.767 } 00:25:31.767 ]' 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:31.767 01:10:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:32.025 01:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:32.025 01:10:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:32.025 
01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:32.025 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:32.025 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:32.025 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:32.025 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e 00:25:32.284 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:32.284 { 00:25:32.284 "name": "1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e", 00:25:32.284 "aliases": [ 00:25:32.284 "lvs/nvme0n1p0" 00:25:32.284 ], 00:25:32.284 "product_name": "Logical Volume", 00:25:32.284 "block_size": 4096, 00:25:32.284 "num_blocks": 26476544, 00:25:32.284 "uuid": "1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e", 00:25:32.284 "assigned_rate_limits": { 00:25:32.284 "rw_ios_per_sec": 0, 00:25:32.284 "rw_mbytes_per_sec": 0, 00:25:32.284 "r_mbytes_per_sec": 0, 00:25:32.284 "w_mbytes_per_sec": 0 00:25:32.284 }, 00:25:32.284 "claimed": false, 00:25:32.284 "zoned": false, 00:25:32.284 "supported_io_types": { 00:25:32.284 "read": true, 00:25:32.284 "write": true, 00:25:32.284 "unmap": true, 00:25:32.284 "flush": false, 00:25:32.284 "reset": true, 00:25:32.284 "nvme_admin": false, 00:25:32.284 "nvme_io": false, 00:25:32.284 "nvme_io_md": false, 00:25:32.284 "write_zeroes": true, 00:25:32.284 "zcopy": false, 00:25:32.284 "get_zone_info": false, 00:25:32.284 "zone_management": false, 00:25:32.284 "zone_append": false, 00:25:32.284 "compare": false, 00:25:32.284 "compare_and_write": false, 00:25:32.284 "abort": false, 00:25:32.284 "seek_hole": true, 00:25:32.284 "seek_data": true, 00:25:32.284 "copy": false, 00:25:32.284 "nvme_iov_md": false 00:25:32.284 }, 00:25:32.284 "driver_specific": { 00:25:32.284 "lvol": { 00:25:32.284 "lvol_store_uuid": "5d81099a-0295-4bf0-a16e-66d67035acdd", 00:25:32.284 "base_bdev": "nvme0n1", 00:25:32.284 "thin_provision": true, 00:25:32.284 "num_allocated_clusters": 0, 00:25:32.284 "snapshot": false, 00:25:32.284 "clone": false, 00:25:32.284 "esnap_clone": false 00:25:32.284 } 00:25:32.284 } 00:25:32.284 } 00:25:32.284 ]' 00:25:32.284 01:10:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:32.284 01:10:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:32.284 01:10:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e --l2p_dram_limit 10' 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
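[Annotation: at this point the test has assembled the full bdev stack for FTL. Condensed into plain rpc.py calls taken from the xtrace above ($lvs and $lvol stand for the UUIDs the two create calls print; the final bdev_ftl_create is the command executed on the next line), the sequence is:]

    # Condensed bdev stack from the xtrace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe -> nvme0n1
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # lvstore on the base bdev
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u $lvs)           # 103424 MiB thin-provisioned lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache NVMe -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB slice -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d $lvol --l2p_dram_limit 10 -c nvc0n1p0

[The repeated JSON dumps above are the get_bdev_size helper at work: it pulls block_size and num_blocks out of bdev_get_bdevs with jq to size each step.]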
00:25:32.285 01:10:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1c9762e8-a0a9-4abb-b20e-f586d8ea7d7e --l2p_dram_limit 10 -c nvc0n1p0 00:25:32.546 [2024-11-26 01:10:55.232197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.232239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:32.546 [2024-11-26 01:10:55.232251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:32.546 [2024-11-26 01:10:55.232257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.232299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.232308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:32.546 [2024-11-26 01:10:55.232318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:32.546 [2024-11-26 01:10:55.232324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.232343] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:32.546 [2024-11-26 01:10:55.232557] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:32.546 [2024-11-26 01:10:55.232572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.232578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:32.546 [2024-11-26 01:10:55.232588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:25:32.546 [2024-11-26 01:10:55.232594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.232646] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9de5639c-e423-4dbf-b262-b7551a48204b 00:25:32.546 [2024-11-26 01:10:55.233626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.233653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:32.546 [2024-11-26 01:10:55.233662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:25:32.546 [2024-11-26 01:10:55.233669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.238438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.238466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:32.546 [2024-11-26 01:10:55.238474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.733 ms 00:25:32.546 [2024-11-26 01:10:55.238483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.238553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.238562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:32.546 [2024-11-26 01:10:55.238568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:32.546 [2024-11-26 01:10:55.238575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.238608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.238617] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:32.546 [2024-11-26 01:10:55.238623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:32.546 [2024-11-26 01:10:55.238630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.238647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:32.546 [2024-11-26 01:10:55.239925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.239950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:32.546 [2024-11-26 01:10:55.239960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:25:32.546 [2024-11-26 01:10:55.239966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.239994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.240002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:32.546 [2024-11-26 01:10:55.240011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:32.546 [2024-11-26 01:10:55.240018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.240035] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:32.546 [2024-11-26 01:10:55.240145] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:32.546 [2024-11-26 01:10:55.240157] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:32.546 [2024-11-26 01:10:55.240166] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:32.546 [2024-11-26 01:10:55.240181] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:32.546 [2024-11-26 01:10:55.240189] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:32.546 [2024-11-26 01:10:55.240200] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:32.546 [2024-11-26 01:10:55.240206] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:32.546 [2024-11-26 01:10:55.240214] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:32.546 [2024-11-26 01:10:55.240220] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:32.546 [2024-11-26 01:10:55.240228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.240235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:32.546 [2024-11-26 01:10:55.240242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:25:32.546 [2024-11-26 01:10:55.240249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.240314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.546 [2024-11-26 01:10:55.240320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:32.546 [2024-11-26 01:10:55.240327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:32.546 [2024-11-26 01:10:55.240333] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.546 [2024-11-26 01:10:55.240404] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:32.546 [2024-11-26 01:10:55.240411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:32.546 [2024-11-26 01:10:55.240418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.546 [2024-11-26 01:10:55.240426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.546 [2024-11-26 01:10:55.240433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:32.546 [2024-11-26 01:10:55.240438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:32.546 [2024-11-26 01:10:55.240444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:32.546 [2024-11-26 01:10:55.240449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:32.546 [2024-11-26 01:10:55.240456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.547 [2024-11-26 01:10:55.240468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:32.547 [2024-11-26 01:10:55.240473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:32.547 [2024-11-26 01:10:55.240481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:32.547 [2024-11-26 01:10:55.240486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:32.547 [2024-11-26 01:10:55.240494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:32.547 [2024-11-26 01:10:55.240499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:32.547 [2024-11-26 01:10:55.240511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:32.547 [2024-11-26 01:10:55.240517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:32.547 [2024-11-26 01:10:55.240528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.547 [2024-11-26 01:10:55.240538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:32.547 [2024-11-26 01:10:55.240544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.547 [2024-11-26 01:10:55.240555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:32.547 [2024-11-26 01:10:55.240560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.547 [2024-11-26 01:10:55.240572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:32.547 [2024-11-26 01:10:55.240577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:32.547 [2024-11-26 
01:10:55.240588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:32.547 [2024-11-26 01:10:55.240594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.547 [2024-11-26 01:10:55.240604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:32.547 [2024-11-26 01:10:55.240609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:32.547 [2024-11-26 01:10:55.240617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:32.547 [2024-11-26 01:10:55.240621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:32.547 [2024-11-26 01:10:55.240628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:32.547 [2024-11-26 01:10:55.240632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:32.547 [2024-11-26 01:10:55.240643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:32.547 [2024-11-26 01:10:55.240649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240653] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:32.547 [2024-11-26 01:10:55.240661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:32.547 [2024-11-26 01:10:55.240666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:32.547 [2024-11-26 01:10:55.240674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:32.547 [2024-11-26 01:10:55.240681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:32.547 [2024-11-26 01:10:55.240687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:32.547 [2024-11-26 01:10:55.240692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:32.547 [2024-11-26 01:10:55.240699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:32.547 [2024-11-26 01:10:55.240704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:32.547 [2024-11-26 01:10:55.240710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:32.547 [2024-11-26 01:10:55.240717] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:32.547 [2024-11-26 01:10:55.240725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.547 [2024-11-26 01:10:55.240732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:32.547 [2024-11-26 01:10:55.240739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:32.547 [2024-11-26 01:10:55.240744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:32.547 [2024-11-26 01:10:55.240750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:32.547 [2024-11-26 01:10:55.240755] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:32.547 [2024-11-26 01:10:55.240764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:32.547 [2024-11-26 01:10:55.240769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:32.547 [2024-11-26 01:10:55.240776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:32.547 [2024-11-26 01:10:55.240781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:32.547 [2024-11-26 01:10:55.240788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:32.547 [2024-11-26 01:10:55.240793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:32.547 [2024-11-26 01:10:55.240800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:32.547 [2024-11-26 01:10:55.240805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:32.547 [2024-11-26 01:10:55.240811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:32.547 [2024-11-26 01:10:55.240816] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:32.547 [2024-11-26 01:10:55.240823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:32.547 [2024-11-26 01:10:55.240829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:32.547 [2024-11-26 01:10:55.240835] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:32.547 [2024-11-26 01:10:55.240861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:32.547 [2024-11-26 01:10:55.240871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:32.547 [2024-11-26 01:10:55.240877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:32.547 [2024-11-26 01:10:55.240885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:32.547 [2024-11-26 01:10:55.240908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:25:32.547 [2024-11-26 01:10:55.240915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:32.547 [2024-11-26 01:10:55.240950] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
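[Annotation: the layout dump above pins down where --l2p_dram_limit 10 bites: the full L2P table would not fit in the allowed DRAM. A back-of-envelope check of the printed numbers:]

    # L2P size check from the layout dump above.
    entries=20971520                               # "L2P entries"
    addr_size=4                                    # "L2P address size" (bytes)
    echo $(( entries * addr_size / 1024 / 1024 ))  # -> 80 (MiB of L2P metadata)
    # 20971520 entries * 4 KiB blocks = 80 GiB of mapped user space, while
    # --l2p_dram_limit 10 caps resident L2P at 10 MiB; the startup trace
    # later reports "l2p maximum resident size is: 9 (of 10) MiB".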
00:25:32.547 [2024-11-26 01:10:55.240962] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:35.084 [2024-11-26 01:10:57.621708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.621776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:35.084 [2024-11-26 01:10:57.621791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2380.748 ms 00:25:35.084 [2024-11-26 01:10:57.621802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.631685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.631732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:35.084 [2024-11-26 01:10:57.631744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.770 ms 00:25:35.084 [2024-11-26 01:10:57.631759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.631883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.631895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:35.084 [2024-11-26 01:10:57.631909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:35.084 [2024-11-26 01:10:57.631918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.641647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.641691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:35.084 [2024-11-26 01:10:57.641702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.691 ms 00:25:35.084 [2024-11-26 01:10:57.641714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.641742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.641753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:35.084 [2024-11-26 01:10:57.641762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:35.084 [2024-11-26 01:10:57.641772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.642226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.642250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:35.084 [2024-11-26 01:10:57.642261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:25:35.084 [2024-11-26 01:10:57.642273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.642400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.642421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:35.084 [2024-11-26 01:10:57.642430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:25:35.084 [2024-11-26 01:10:57.642442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.648927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.648964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:35.084 [2024-11-26 
01:10:57.648974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.466 ms 00:25:35.084 [2024-11-26 01:10:57.648983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.657896] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:35.084 [2024-11-26 01:10:57.661114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.084 [2024-11-26 01:10:57.661153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:35.084 [2024-11-26 01:10:57.661165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.063 ms 00:25:35.084 [2024-11-26 01:10:57.661173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.084 [2024-11-26 01:10:57.727315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.727413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:35.085 [2024-11-26 01:10:57.727450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.100 ms 00:25:35.085 [2024-11-26 01:10:57.727468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.727914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.727942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:35.085 [2024-11-26 01:10:57.727967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:25:35.085 [2024-11-26 01:10:57.727983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.734335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.734406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:35.085 [2024-11-26 01:10:57.734438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.299 ms 00:25:35.085 [2024-11-26 01:10:57.734456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.739783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.739825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:35.085 [2024-11-26 01:10:57.739868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.251 ms 00:25:35.085 [2024-11-26 01:10:57.739877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.740197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.740214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:35.085 [2024-11-26 01:10:57.740227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:25:35.085 [2024-11-26 01:10:57.740234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.773744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.773963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:35.085 [2024-11-26 01:10:57.773992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.484 ms 00:25:35.085 [2024-11-26 01:10:57.774000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.780125] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.780168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:35.085 [2024-11-26 01:10:57.780182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.051 ms 00:25:35.085 [2024-11-26 01:10:57.780190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.785085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.785235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:35.085 [2024-11-26 01:10:57.785257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.848 ms 00:25:35.085 [2024-11-26 01:10:57.785265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.789803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.789863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:35.085 [2024-11-26 01:10:57.789880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.445 ms 00:25:35.085 [2024-11-26 01:10:57.789887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.789936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.789946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:35.085 [2024-11-26 01:10:57.789957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:35.085 [2024-11-26 01:10:57.789965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.790058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:35.085 [2024-11-26 01:10:57.790068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:35.085 [2024-11-26 01:10:57.790081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:35.085 [2024-11-26 01:10:57.790089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:35.085 [2024-11-26 01:10:57.791149] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2558.494 ms, result 0 00:25:35.085 { 00:25:35.085 "name": "ftl0", 00:25:35.085 "uuid": "9de5639c-e423-4dbf-b262-b7551a48204b" 00:25:35.085 } 00:25:35.085 01:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:35.085 01:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:35.347 /dev/nbd0 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:35.347 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:35.607 1+0 records in 00:25:35.607 1+0 records out 00:25:35.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498585 s, 8.2 MB/s 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:35.607 01:10:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:35.607 [2024-11-26 01:10:58.343338] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:25:35.607 [2024-11-26 01:10:58.343476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93508 ] 00:25:35.607 [2024-11-26 01:10:58.485175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:35.607 [2024-11-26 01:10:58.516365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.867 [2024-11-26 01:10:58.544801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.827  [2024-11-26T01:11:00.765Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-26T01:11:01.700Z] Copying: 400/1024 [MB] (210 MBps) [2024-11-26T01:11:02.640Z] Copying: 664/1024 [MB] (263 MBps) [2024-11-26T01:11:03.209Z] Copying: 920/1024 [MB] (256 MBps) [2024-11-26T01:11:03.209Z] Copying: 1024/1024 [MB] (average 232 MBps) 00:25:40.292 00:25:40.292 01:11:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:42.204 01:11:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:42.204 [2024-11-26 01:11:05.086779] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
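The copy output above is the data-staging phase of dirty_shutdown.sh, and the spdk_dd now starting writes that data through the FTL device. A minimal bash sketch of this phase, reconstructed only from the commands echoed in this log (the long /home/vagrant/spdk_repo paths are shortened for readability):

  # expose the FTL bdev as a kernel block device via NBD
  modprobe nbd
  rpc.py nbd_start_disk ftl0 /dev/nbd0

  # stage 1 GiB (262144 x 4096-byte blocks) of random data and checksum it
  spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
  md5sum testfile

  # push the staged data through the FTL device with O_DIRECT
  spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct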
00:25:42.204 [2024-11-26 01:11:05.086883] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93580 ] 00:25:42.464 [2024-11-26 01:11:05.211704] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:42.464 [2024-11-26 01:11:05.241721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.464 [2024-11-26 01:11:05.260059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:43.403  [2024-11-26T01:11:07.696Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-26T01:11:08.629Z] Copying: 46/1024 [MB] (29 MBps) [2024-11-26T01:11:09.561Z] Copying: 76/1024 [MB] (30 MBps) [2024-11-26T01:11:10.492Z] Copying: 114/1024 [MB] (37 MBps) [2024-11-26T01:11:11.433Z] Copying: 146/1024 [MB] (32 MBps) [2024-11-26T01:11:12.366Z] Copying: 180/1024 [MB] (33 MBps) [2024-11-26T01:11:13.739Z] Copying: 212/1024 [MB] (32 MBps) [2024-11-26T01:11:14.671Z] Copying: 247/1024 [MB] (34 MBps) [2024-11-26T01:11:15.604Z] Copying: 279/1024 [MB] (32 MBps) [2024-11-26T01:11:16.537Z] Copying: 314/1024 [MB] (35 MBps) [2024-11-26T01:11:17.471Z] Copying: 346/1024 [MB] (31 MBps) [2024-11-26T01:11:18.404Z] Copying: 378/1024 [MB] (31 MBps) [2024-11-26T01:11:19.337Z] Copying: 410/1024 [MB] (31 MBps) [2024-11-26T01:11:20.709Z] Copying: 443/1024 [MB] (33 MBps) [2024-11-26T01:11:21.644Z] Copying: 474/1024 [MB] (31 MBps) [2024-11-26T01:11:22.578Z] Copying: 502/1024 [MB] (27 MBps) [2024-11-26T01:11:23.512Z] Copying: 540/1024 [MB] (37 MBps) [2024-11-26T01:11:24.444Z] Copying: 577/1024 [MB] (37 MBps) [2024-11-26T01:11:25.375Z] Copying: 609/1024 [MB] (32 MBps) [2024-11-26T01:11:26.746Z] Copying: 641/1024 [MB] (31 MBps) [2024-11-26T01:11:27.311Z] Copying: 677/1024 [MB] (36 MBps) [2024-11-26T01:11:28.686Z] Copying: 714/1024 [MB] (37 MBps) [2024-11-26T01:11:29.619Z] Copying: 752/1024 [MB] (37 MBps) [2024-11-26T01:11:30.552Z] Copying: 790/1024 [MB] (37 MBps) [2024-11-26T01:11:31.486Z] Copying: 827/1024 [MB] (37 MBps) [2024-11-26T01:11:32.530Z] Copying: 860/1024 [MB] (33 MBps) [2024-11-26T01:11:33.464Z] Copying: 891/1024 [MB] (30 MBps) [2024-11-26T01:11:34.394Z] Copying: 929/1024 [MB] (37 MBps) [2024-11-26T01:11:35.328Z] Copying: 966/1024 [MB] (37 MBps) [2024-11-26T01:11:35.896Z] Copying: 1004/1024 [MB] (37 MBps) [2024-11-26T01:11:36.156Z] Copying: 1024/1024 [MB] (average 33 MBps) 00:26:13.239 00:26:13.239 01:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:13.239 01:11:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:13.501 01:11:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:13.501 [2024-11-26 01:11:36.358642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.358815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:13.501 [2024-11-26 01:11:36.358834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:13.501 [2024-11-26 01:11:36.358855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.358879] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] 
FTL IO channel destroy on app_thread 00:26:13.501 [2024-11-26 01:11:36.359409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.359424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:13.501 [2024-11-26 01:11:36.359440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:26:13.501 [2024-11-26 01:11:36.359446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.361961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.362020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:13.501 [2024-11-26 01:11:36.362029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.495 ms 00:26:13.501 [2024-11-26 01:11:36.362046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.378108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.378136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:13.501 [2024-11-26 01:11:36.378147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.046 ms 00:26:13.501 [2024-11-26 01:11:36.378153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.382879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.382979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:13.501 [2024-11-26 01:11:36.382994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.695 ms 00:26:13.501 [2024-11-26 01:11:36.383001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.385234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.385261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:13.501 [2024-11-26 01:11:36.385270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:26:13.501 [2024-11-26 01:11:36.385276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.390329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.390432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:13.501 [2024-11-26 01:11:36.390447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.022 ms 00:26:13.501 [2024-11-26 01:11:36.390455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.390549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.390557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:13.501 [2024-11-26 01:11:36.390568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:13.501 [2024-11-26 01:11:36.390575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.393157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.393182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:13.501 [2024-11-26 01:11:36.393191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:26:13.501 
[2024-11-26 01:11:36.393197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.395043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.501 [2024-11-26 01:11:36.395068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:13.501 [2024-11-26 01:11:36.395078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 00:26:13.501 [2024-11-26 01:11:36.395084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.501 [2024-11-26 01:11:36.396569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.502 [2024-11-26 01:11:36.396592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:13.502 [2024-11-26 01:11:36.396601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.456 ms 00:26:13.502 [2024-11-26 01:11:36.396606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.502 [2024-11-26 01:11:36.398083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.502 [2024-11-26 01:11:36.398162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:13.502 [2024-11-26 01:11:36.398203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:26:13.502 [2024-11-26 01:11:36.398220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.502 [2024-11-26 01:11:36.398315] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:13.502 [2024-11-26 01:11:36.398342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:26:13.502 [2024-11-26 01:11:36.398617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.398998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:13.502 [2024-11-26 01:11:36.399093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399134] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:13.503 [2024-11-26 01:11:36.399220] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:13.503 [2024-11-26 01:11:36.399228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9de5639c-e423-4dbf-b262-b7551a48204b 00:26:13.503 [2024-11-26 01:11:36.399235] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:13.503 [2024-11-26 01:11:36.399242] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:13.503 [2024-11-26 01:11:36.399248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:13.503 [2024-11-26 01:11:36.399259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:13.503 [2024-11-26 01:11:36.399265] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:13.503 [2024-11-26 01:11:36.399272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:13.503 [2024-11-26 01:11:36.399278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:13.503 [2024-11-26 01:11:36.399284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:13.503 [2024-11-26 01:11:36.399289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:13.503 [2024-11-26 01:11:36.399296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.503 [2024-11-26 01:11:36.399303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:13.503 [2024-11-26 01:11:36.399316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:26:13.503 [2024-11-26 01:11:36.399321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.503 [2024-11-26 01:11:36.401183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.503 [2024-11-26 01:11:36.401219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:26:13.503 [2024-11-26 01:11:36.401238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:26:13.503 [2024-11-26 01:11:36.401253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.503 [2024-11-26 01:11:36.401357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.503 [2024-11-26 01:11:36.401378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:13.503 [2024-11-26 01:11:36.401511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:13.503 [2024-11-26 01:11:36.401529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.503 [2024-11-26 01:11:36.407467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.503 [2024-11-26 01:11:36.407557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.503 [2024-11-26 01:11:36.407598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.503 [2024-11-26 01:11:36.407615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.503 [2024-11-26 01:11:36.407674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.503 [2024-11-26 01:11:36.407696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.503 [2024-11-26 01:11:36.407714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.503 [2024-11-26 01:11:36.407729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.503 [2024-11-26 01:11:36.407831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.503 [2024-11-26 01:11:36.407867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.503 [2024-11-26 01:11:36.407885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.503 [2024-11-26 01:11:36.407928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.503 [2024-11-26 01:11:36.407957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.503 [2024-11-26 01:11:36.407974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.503 [2024-11-26 01:11:36.407994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.503 [2024-11-26 01:11:36.408009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.418832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 01:11:36.418959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:13.764 [2024-11-26 01:11:36.418998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.419016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.427821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 01:11:36.427942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:13.764 [2024-11-26 01:11:36.428012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.428035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.428119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 
01:11:36.428140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:13.764 [2024-11-26 01:11:36.428157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.428172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.428251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 01:11:36.428271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:13.764 [2024-11-26 01:11:36.428289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.428306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.428404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 01:11:36.428426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:13.764 [2024-11-26 01:11:36.428444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.428463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.428527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 01:11:36.428548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:13.764 [2024-11-26 01:11:36.428565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.428580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.428631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 01:11:36.428649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:13.764 [2024-11-26 01:11:36.428667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.428686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.428739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.764 [2024-11-26 01:11:36.428758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:13.764 [2024-11-26 01:11:36.428775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.764 [2024-11-26 01:11:36.428793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.764 [2024-11-26 01:11:36.428932] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.252 ms, result 0 00:26:13.764 true 00:26:13.764 01:11:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 93383 00:26:13.764 01:11:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid93383 00:26:13.764 01:11:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:13.764 [2024-11-26 01:11:36.516502] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
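The "Starting SPDK" line above belongs to the @87 spdk_dd run that stages data after the crash; the dirty-shutdown step itself happens just before it, when the SPDK target that served ftl0 is killed outright rather than torn down. A minimal sketch, using the pid and paths echoed in this log:

  # simulate a crash: SIGKILL the SPDK target instead of stopping it cleanly
  kill -9 93383
  rm -f /dev/shm/spdk_tgt_trace.pid93383

  # stage a second 1 GiB file for the post-crash write
  spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144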
00:26:13.764 [2024-11-26 01:11:36.516742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93917 ] 00:26:13.764 [2024-11-26 01:11:36.649838] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:13.764 [2024-11-26 01:11:36.678350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.025 [2024-11-26 01:11:36.717465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.968  [2024-11-26T01:11:38.828Z] Copying: 213/1024 [MB] (213 MBps) [2024-11-26T01:11:40.226Z] Copying: 466/1024 [MB] (252 MBps) [2024-11-26T01:11:41.163Z] Copying: 656/1024 [MB] (190 MBps) [2024-11-26T01:11:41.732Z] Copying: 859/1024 [MB] (202 MBps) [2024-11-26T01:11:41.732Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:26:18.815 00:26:18.815 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 93383 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:18.815 01:11:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:18.815 [2024-11-26 01:11:41.663970] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:26:18.815 [2024-11-26 01:11:41.664248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93970 ] 00:26:19.075 [2024-11-26 01:11:41.797109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
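What follows below is the post-crash path: the @88 spdk_dd opens ftl0 directly as an output bdev (--ob) from the saved JSON config, a blobstore recovery pass runs on the write-buffer cache, and the superblock load reports "SHM: clean 0, shm_clean 0" before the Restore* startup steps. A minimal sketch of the invocation, with the config path shortened (the full path is .../spdk/test/ftl/config/ftl.json as echoed above, presumably assembled from the save_subsystem_config output captured earlier):

  # replay against the recovered device: load ftl0 from the saved config and
  # write the second file at block offset 262144 (the second GiB)
  spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json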
00:26:19.075 [2024-11-26 01:11:41.822545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.075 [2024-11-26 01:11:41.843976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.075 [2024-11-26 01:11:41.927237] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.075 [2024-11-26 01:11:41.927403] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:19.075 [2024-11-26 01:11:41.988782] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:19.075 [2024-11-26 01:11:41.989070] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:19.075 [2024-11-26 01:11:41.989369] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:19.335 [2024-11-26 01:11:42.149035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.149067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:19.335 [2024-11-26 01:11:42.149078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:19.335 [2024-11-26 01:11:42.149084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.149121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.149130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:19.335 [2024-11-26 01:11:42.149136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:19.335 [2024-11-26 01:11:42.149142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.149154] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:19.335 [2024-11-26 01:11:42.149322] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:19.335 [2024-11-26 01:11:42.149332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.149340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:19.335 [2024-11-26 01:11:42.149346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:26:19.335 [2024-11-26 01:11:42.149352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.150278] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:19.335 [2024-11-26 01:11:42.152161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.152189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:19.335 [2024-11-26 01:11:42.152196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms 00:26:19.335 [2024-11-26 01:11:42.152206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.152251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.152259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:19.335 [2024-11-26 01:11:42.152265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:26:19.335 [2024-11-26 01:11:42.152272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.156521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.156545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:19.335 [2024-11-26 01:11:42.156552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.220 ms 00:26:19.335 [2024-11-26 01:11:42.156558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.156619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.156626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:19.335 [2024-11-26 01:11:42.156632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:19.335 [2024-11-26 01:11:42.156639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.156672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.156682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:19.335 [2024-11-26 01:11:42.156689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:19.335 [2024-11-26 01:11:42.156694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.156707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:19.335 [2024-11-26 01:11:42.157859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.157880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:19.335 [2024-11-26 01:11:42.157889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:26:19.335 [2024-11-26 01:11:42.157895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.157919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.157926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:19.335 [2024-11-26 01:11:42.157932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:19.335 [2024-11-26 01:11:42.157937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.157955] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:19.335 [2024-11-26 01:11:42.157969] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:19.335 [2024-11-26 01:11:42.157997] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:19.335 [2024-11-26 01:11:42.158010] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:19.335 [2024-11-26 01:11:42.158104] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:19.335 [2024-11-26 01:11:42.158113] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:19.335 [2024-11-26 01:11:42.158121] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:19.335 [2024-11-26 01:11:42.158129] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:19.335 [2024-11-26 01:11:42.158135] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:19.335 [2024-11-26 01:11:42.158144] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:19.335 [2024-11-26 01:11:42.158150] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:19.335 [2024-11-26 01:11:42.158155] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:19.335 [2024-11-26 01:11:42.158162] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:19.335 [2024-11-26 01:11:42.158168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.158173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:19.335 [2024-11-26 01:11:42.158179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:26:19.335 [2024-11-26 01:11:42.158184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.158248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.335 [2024-11-26 01:11:42.158255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:19.335 [2024-11-26 01:11:42.158260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:19.335 [2024-11-26 01:11:42.158266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.335 [2024-11-26 01:11:42.158340] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:19.335 [2024-11-26 01:11:42.158348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:19.335 [2024-11-26 01:11:42.158358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.335 [2024-11-26 01:11:42.158365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.335 [2024-11-26 01:11:42.158370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:19.335 [2024-11-26 01:11:42.158375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:19.335 [2024-11-26 01:11:42.158380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:19.336 [2024-11-26 01:11:42.158391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.336 [2024-11-26 01:11:42.158401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:19.336 [2024-11-26 01:11:42.158407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:19.336 [2024-11-26 01:11:42.158416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:19.336 [2024-11-26 01:11:42.158421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:19.336 [2024-11-26 01:11:42.158426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:19.336 [2024-11-26 01:11:42.158431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:19.336 [2024-11-26 01:11:42.158441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158446] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:19.336 [2024-11-26 01:11:42.158457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:19.336 [2024-11-26 01:11:42.158472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:19.336 [2024-11-26 01:11:42.158487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:19.336 [2024-11-26 01:11:42.158509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:19.336 [2024-11-26 01:11:42.158526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.336 [2024-11-26 01:11:42.158537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:19.336 [2024-11-26 01:11:42.158543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:19.336 [2024-11-26 01:11:42.158548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:19.336 [2024-11-26 01:11:42.158554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:19.336 [2024-11-26 01:11:42.158560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:19.336 [2024-11-26 01:11:42.158565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:19.336 [2024-11-26 01:11:42.158576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:19.336 [2024-11-26 01:11:42.158582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158590] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:19.336 [2024-11-26 01:11:42.158600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:19.336 [2024-11-26 01:11:42.158606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:19.336 [2024-11-26 01:11:42.158618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:19.336 [2024-11-26 01:11:42.158624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:19.336 [2024-11-26 01:11:42.158630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:19.336 
[2024-11-26 01:11:42.158635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:19.336 [2024-11-26 01:11:42.158639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:19.336 [2024-11-26 01:11:42.158644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:19.336 [2024-11-26 01:11:42.158650] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:19.336 [2024-11-26 01:11:42.158659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.336 [2024-11-26 01:11:42.158667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:19.336 [2024-11-26 01:11:42.158672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:19.336 [2024-11-26 01:11:42.158677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:19.336 [2024-11-26 01:11:42.158682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:19.336 [2024-11-26 01:11:42.158688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:19.336 [2024-11-26 01:11:42.158694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:19.336 [2024-11-26 01:11:42.158700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:19.336 [2024-11-26 01:11:42.158705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:19.336 [2024-11-26 01:11:42.158710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:19.336 [2024-11-26 01:11:42.158715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:19.336 [2024-11-26 01:11:42.158720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:19.336 [2024-11-26 01:11:42.158725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:19.336 [2024-11-26 01:11:42.158730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:19.336 [2024-11-26 01:11:42.158736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:19.336 [2024-11-26 01:11:42.158741] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:19.336 [2024-11-26 01:11:42.158747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:19.336 [2024-11-26 01:11:42.158753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:19.336 [2024-11-26 01:11:42.158759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:19.336 [2024-11-26 01:11:42.158764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:19.336 [2024-11-26 01:11:42.158769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:19.336 [2024-11-26 01:11:42.158775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.158782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:19.336 [2024-11-26 01:11:42.158788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:26:19.336 [2024-11-26 01:11:42.158793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.336 [2024-11-26 01:11:42.166723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.166816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:19.336 [2024-11-26 01:11:42.166871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.897 ms 00:26:19.336 [2024-11-26 01:11:42.166890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.336 [2024-11-26 01:11:42.166966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.167035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:19.336 [2024-11-26 01:11:42.167053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:19.336 [2024-11-26 01:11:42.167068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.336 [2024-11-26 01:11:42.190831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.191113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:19.336 [2024-11-26 01:11:42.191616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.716 ms 00:26:19.336 [2024-11-26 01:11:42.191896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.336 [2024-11-26 01:11:42.191959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.192037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:19.336 [2024-11-26 01:11:42.192111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:19.336 [2024-11-26 01:11:42.192134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.336 [2024-11-26 01:11:42.192504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.192590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:19.336 [2024-11-26 01:11:42.192670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:26:19.336 [2024-11-26 01:11:42.192814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.336 [2024-11-26 01:11:42.193247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.193415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:19.336 [2024-11-26 01:11:42.193556] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:26:19.336 [2024-11-26 01:11:42.193622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.336 [2024-11-26 01:11:42.203432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.336 [2024-11-26 01:11:42.203683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:19.336 [2024-11-26 01:11:42.203723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.654 ms 00:26:19.336 [2024-11-26 01:11:42.203763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.337 [2024-11-26 01:11:42.207061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:19.337 [2024-11-26 01:11:42.207094] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:19.337 [2024-11-26 01:11:42.207108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.337 [2024-11-26 01:11:42.207116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:19.337 [2024-11-26 01:11:42.207124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:26:19.337 [2024-11-26 01:11:42.207131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.337 [2024-11-26 01:11:42.221743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.337 [2024-11-26 01:11:42.221785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:19.337 [2024-11-26 01:11:42.221806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.574 ms 00:26:19.337 [2024-11-26 01:11:42.221815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.337 [2024-11-26 01:11:42.223942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.337 [2024-11-26 01:11:42.223973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:19.337 [2024-11-26 01:11:42.223983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:26:19.337 [2024-11-26 01:11:42.223989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.337 [2024-11-26 01:11:42.225908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.337 [2024-11-26 01:11:42.225938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:19.337 [2024-11-26 01:11:42.225947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.885 ms 00:26:19.337 [2024-11-26 01:11:42.225954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.337 [2024-11-26 01:11:42.226309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.337 [2024-11-26 01:11:42.226321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:19.337 [2024-11-26 01:11:42.226330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:26:19.337 [2024-11-26 01:11:42.226338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.337 [2024-11-26 01:11:42.242279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.337 [2024-11-26 01:11:42.242329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:19.337 [2024-11-26 01:11:42.242340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
15.921 ms 00:26:19.337 [2024-11-26 01:11:42.242348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.337 [2024-11-26 01:11:42.249796] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:19.597 [2024-11-26 01:11:42.252300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.597 [2024-11-26 01:11:42.252329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:19.597 [2024-11-26 01:11:42.252340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.911 ms 00:26:19.597 [2024-11-26 01:11:42.252349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.597 [2024-11-26 01:11:42.252434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.597 [2024-11-26 01:11:42.252446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:19.597 [2024-11-26 01:11:42.252458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:19.597 [2024-11-26 01:11:42.252465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.597 [2024-11-26 01:11:42.252532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.597 [2024-11-26 01:11:42.252542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:19.597 [2024-11-26 01:11:42.252550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:19.597 [2024-11-26 01:11:42.252558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.597 [2024-11-26 01:11:42.252576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.597 [2024-11-26 01:11:42.252584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:19.597 [2024-11-26 01:11:42.252591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:19.597 [2024-11-26 01:11:42.252602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.597 [2024-11-26 01:11:42.252635] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:19.597 [2024-11-26 01:11:42.252644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.597 [2024-11-26 01:11:42.252652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:19.597 [2024-11-26 01:11:42.252661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:19.597 [2024-11-26 01:11:42.252669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.597 [2024-11-26 01:11:42.255904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.597 [2024-11-26 01:11:42.255933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:19.597 [2024-11-26 01:11:42.255942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.219 ms 00:26:19.597 [2024-11-26 01:11:42.255954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.597 [2024-11-26 01:11:42.256019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:19.597 [2024-11-26 01:11:42.256032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:19.597 [2024-11-26 01:11:42.256044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:19.597 [2024-11-26 01:11:42.256051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:19.597 
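Each management step in the startup trace above is bracketed by the same four mngt/ftl_mngt.c entries: an Action marker, the step name (line 428), its duration (line 430), and a status (line 431). When eyeballing where the 107.788 ms total reported just below actually goes, it helps to aggregate the duration entries per step name. The sketch below is a hypothetical post-processing helper, not part of SPDK; the NAME_RE/DUR_RE patterns and the step_durations name are my assumptions, written against the flattened log format shown here.

```python
import re
from collections import defaultdict

# Pair each "name: <step>" entry (ftl_mngt.c:428) with the "duration: <n> ms"
# entry that follows it (ftl_mngt.c:430) and sum wall time per step.
# The test harness appends an elapsed HH:MM:SS.mmm stamp after every entry,
# which NAME_RE uses to find the end of the step name.
NAME_RE = re.compile(r"name:\s+(.*?)\s+\d{2}:\d{2}:\d{2}\.\d{3}", re.S)
DUR_RE = re.compile(r"duration:\s+([0-9.]+)\s+ms")

def step_durations(log_text):
    totals = defaultdict(float)
    # The 428/430 entries strictly alternate in this trace, so pairing by
    # order is safe; finish_msg lines use "name '...'" / "duration =" and
    # are matched by neither pattern.
    for name, ms in zip(NAME_RE.findall(log_text), DUR_RE.findall(log_text)):
        totals[name.strip()] += float(ms)
    return dict(totals)

# For the startup trace above, step_durations(text)["Initialize NV cache"]
# would come out to 23.716.
```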
[2024-11-26 01:11:42.257232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.788 ms, result 0 00:26:20.544
[2024-11-26T01:11:44.406Z] Copying: 13/1024 [MB] (13 MBps) [... ~54 intermediate 'Copying: N/1024 [MB]' progress-meter updates elided ...] [2024-11-26T01:12:39.614Z] Copying: 986/1024 [MB] (16 MBps)
[2024-11-26T01:12:40.556Z] Copying: 999/1024 [MB] (12 MBps) [2024-11-26T01:12:41.500Z] Copying: 1015/1024 [MB] (15 MBps) [2024-11-26T01:12:41.759Z] Copying: 1048100/1048576 [kB] (8664 kBps) [2024-11-26T01:12:41.759Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-26 01:12:41.716151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.842 [2024-11-26 01:12:41.716225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:18.842 [2024-11-26 01:12:41.716243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:18.843 [2024-11-26 01:12:41.716258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.843 [2024-11-26 01:12:41.718892] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:18.843 [2024-11-26 01:12:41.722612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.843 [2024-11-26 01:12:41.722860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:18.843 [2024-11-26 01:12:41.722884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:27:18.843 [2024-11-26 01:12:41.722895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.843 [2024-11-26 01:12:41.734282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.843 [2024-11-26 01:12:41.734339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:18.843 [2024-11-26 01:12:41.734352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.404 ms 00:27:18.843 [2024-11-26 01:12:41.734361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.843 [2024-11-26 01:12:41.759076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.843 [2024-11-26 01:12:41.759128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:18.843 [2024-11-26 01:12:41.759143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.686 ms 00:27:18.843 [2024-11-26 01:12:41.759162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.103 [2024-11-26 01:12:41.765475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.103 [2024-11-26 01:12:41.765687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:19.103 [2024-11-26 01:12:41.765717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.274 ms 00:27:19.103 [2024-11-26 01:12:41.765726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.103 [2024-11-26 01:12:41.768776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.103 [2024-11-26 01:12:41.768957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:19.103 [2024-11-26 01:12:41.768976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.976 ms 00:27:19.103 [2024-11-26 01:12:41.768985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.103 [2024-11-26 01:12:41.773804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.103 [2024-11-26 01:12:41.773873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:19.103 [2024-11-26 01:12:41.773886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.745 ms 00:27:19.103 [2024-11-26 01:12:41.773902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
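The progress meter above reports an average of 17 MBps for the 1024 MiB copy. As a rough cross-check against the surrounding timestamps (FTL startup finished at 01:11:42.26, final progress update at 01:12:41.76):

$$\frac{1024\ \text{MB}}{01{:}12{:}41.76 - 01{:}11{:}42.26} \approx \frac{1024\ \text{MB}}{59.5\ \text{s}} \approx 17.2\ \text{MB/s},$$

consistent with the meter's rounded-down average.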
00:27:19.365 [2024-11-26 01:12:42.056571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.365 [2024-11-26 01:12:42.056766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:19.366 [2024-11-26 01:12:42.056787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 282.620 ms 00:27:19.366 [2024-11-26 01:12:42.056796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.366 [2024-11-26 01:12:42.060063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.366 [2024-11-26 01:12:42.060111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:19.366 [2024-11-26 01:12:42.060122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.238 ms 00:27:19.366 [2024-11-26 01:12:42.060132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.366 [2024-11-26 01:12:42.063079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.366 [2024-11-26 01:12:42.063241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:19.366 [2024-11-26 01:12:42.063259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.906 ms 00:27:19.366 [2024-11-26 01:12:42.063266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.366 [2024-11-26 01:12:42.065568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.366 [2024-11-26 01:12:42.065615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:19.366 [2024-11-26 01:12:42.065626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.223 ms 00:27:19.366 [2024-11-26 01:12:42.065634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.366 [2024-11-26 01:12:42.067927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.366 [2024-11-26 01:12:42.067974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:19.366 [2024-11-26 01:12:42.067985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:27:19.366 [2024-11-26 01:12:42.067991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.366 [2024-11-26 01:12:42.068030] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:19.366 [2024-11-26 01:12:42.068044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 102656 / 261120 wr_cnt: 1 state: open 00:27:19.366 [2024-11-26 01:12:42.068068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 
01:12:42.068124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:27:19.366 [2024-11-26 01:12:42.068318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:19.366 [2024-11-26 01:12:42.068664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:19.367 [2024-11-26 01:12:42.068927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:19.367 [2024-11-26 01:12:42.068936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9de5639c-e423-4dbf-b262-b7551a48204b 00:27:19.367 [2024-11-26 01:12:42.068945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 102656 00:27:19.367 [2024-11-26 01:12:42.068953] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 103616 00:27:19.367 [2024-11-26 01:12:42.068961] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 102656 00:27:19.367 [2024-11-26 01:12:42.068970] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0094 00:27:19.367 [2024-11-26 01:12:42.068977] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:19.367 [2024-11-26 01:12:42.068986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:19.367 [2024-11-26 01:12:42.069000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:19.367 
[2024-11-26 01:12:42.069006] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:19.367 [2024-11-26 01:12:42.069013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:19.367 [2024-11-26 01:12:42.069021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.367 [2024-11-26 01:12:42.069029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:19.367 [2024-11-26 01:12:42.069040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:27:19.367 [2024-11-26 01:12:42.069051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.071333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.367 [2024-11-26 01:12:42.071365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:19.367 [2024-11-26 01:12:42.071376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:27:19.367 [2024-11-26 01:12:42.071384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.071514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.367 [2024-11-26 01:12:42.071523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:19.367 [2024-11-26 01:12:42.071532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:27:19.367 [2024-11-26 01:12:42.071545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.078920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.078965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.367 [2024-11-26 01:12:42.078976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.078984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.079043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.079052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.367 [2024-11-26 01:12:42.079061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.079069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.079127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.079138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.367 [2024-11-26 01:12:42.079146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.079154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.079181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.079190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.367 [2024-11-26 01:12:42.079198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.079206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.092219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.092272] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.367 [2024-11-26 01:12:42.092284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.092302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.102281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.367 [2024-11-26 01:12:42.102300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.102309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.102364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:19.367 [2024-11-26 01:12:42.102373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.102381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.102424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:19.367 [2024-11-26 01:12:42.102435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.102444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.102527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:19.367 [2024-11-26 01:12:42.102536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.102544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.102580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:19.367 [2024-11-26 01:12:42.102592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.102599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.102650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:19.367 [2024-11-26 01:12:42.102658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.102666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:19.367 [2024-11-26 01:12:42.102726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:19.367 [2024-11-26 01:12:42.102737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:19.367 [2024-11-26 01:12:42.102746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.367 [2024-11-26 01:12:42.102902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 388.098 ms, result 0 00:27:19.939 00:27:19.939 00:27:19.939 01:12:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:22.486 01:12:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:22.486 [2024-11-26 01:12:45.040998] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:27:22.486 [2024-11-26 01:12:45.041122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94618 ] 00:27:22.486 [2024-11-26 01:12:45.176184] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:22.486 [2024-11-26 01:12:45.205453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.486 [2024-11-26 01:12:45.233888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.486 [2024-11-26 01:12:45.348602] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.486 [2024-11-26 01:12:45.348689] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:22.749 [2024-11-26 01:12:45.510297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.510355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:22.749 [2024-11-26 01:12:45.510370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.749 [2024-11-26 01:12:45.510379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.510440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.510454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.749 [2024-11-26 01:12:45.510466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:22.749 [2024-11-26 01:12:45.510477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.510497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:22.749 [2024-11-26 01:12:45.510775] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:22.749 [2024-11-26 01:12:45.510798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.510810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.749 [2024-11-26 01:12:45.510825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:27:22.749 [2024-11-26 01:12:45.510835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.512594] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:22.749 [2024-11-26 01:12:45.516316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.516366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load 
super block 00:27:22.749 [2024-11-26 01:12:45.516384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:27:22.749 [2024-11-26 01:12:45.516395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.516478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.516489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:22.749 [2024-11-26 01:12:45.516498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:27:22.749 [2024-11-26 01:12:45.516506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.524482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.524527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.749 [2024-11-26 01:12:45.524547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.929 ms 00:27:22.749 [2024-11-26 01:12:45.524555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.524661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.524673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.749 [2024-11-26 01:12:45.524684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:22.749 [2024-11-26 01:12:45.524692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.524747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.524756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:22.749 [2024-11-26 01:12:45.524764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:22.749 [2024-11-26 01:12:45.524781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.524807] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:22.749 [2024-11-26 01:12:45.526886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.526925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.749 [2024-11-26 01:12:45.526935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:27:22.749 [2024-11-26 01:12:45.526942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.526981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.526989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:22.749 [2024-11-26 01:12:45.527000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:22.749 [2024-11-26 01:12:45.527012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.527034] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:22.749 [2024-11-26 01:12:45.527055] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:22.749 [2024-11-26 01:12:45.527091] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:22.749 [2024-11-26 
01:12:45.527111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:22.749 [2024-11-26 01:12:45.527217] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:22.749 [2024-11-26 01:12:45.527230] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:22.749 [2024-11-26 01:12:45.527242] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:22.749 [2024-11-26 01:12:45.527257] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:22.749 [2024-11-26 01:12:45.527267] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:22.749 [2024-11-26 01:12:45.527278] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:22.749 [2024-11-26 01:12:45.527286] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:22.749 [2024-11-26 01:12:45.527294] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:22.749 [2024-11-26 01:12:45.527306] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:22.749 [2024-11-26 01:12:45.527314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.527322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:22.749 [2024-11-26 01:12:45.527330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:27:22.749 [2024-11-26 01:12:45.527340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.527422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.749 [2024-11-26 01:12:45.527431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:22.749 [2024-11-26 01:12:45.527438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:22.749 [2024-11-26 01:12:45.527449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.749 [2024-11-26 01:12:45.527546] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:22.749 [2024-11-26 01:12:45.527557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:22.749 [2024-11-26 01:12:45.527571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.749 [2024-11-26 01:12:45.527588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.749 [2024-11-26 01:12:45.527599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:22.749 [2024-11-26 01:12:45.527607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:22.749 [2024-11-26 01:12:45.527622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:22.749 [2024-11-26 01:12:45.527631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:22.749 [2024-11-26 01:12:45.527640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:22.749 [2024-11-26 01:12:45.527648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.749 [2024-11-26 01:12:45.527657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:22.749 [2024-11-26 01:12:45.527664] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:22.749 [2024-11-26 01:12:45.527672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.749 [2024-11-26 01:12:45.527679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:22.749 [2024-11-26 01:12:45.527686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:22.750 [2024-11-26 01:12:45.527694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:22.750 [2024-11-26 01:12:45.527710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:22.750 [2024-11-26 01:12:45.527719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:22.750 [2024-11-26 01:12:45.527738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.750 [2024-11-26 01:12:45.527753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:22.750 [2024-11-26 01:12:45.527761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.750 [2024-11-26 01:12:45.527776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:22.750 [2024-11-26 01:12:45.527783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.750 [2024-11-26 01:12:45.527798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:22.750 [2024-11-26 01:12:45.527806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.750 [2024-11-26 01:12:45.527821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:22.750 [2024-11-26 01:12:45.527829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.750 [2024-11-26 01:12:45.527863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:22.750 [2024-11-26 01:12:45.527873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:22.750 [2024-11-26 01:12:45.527881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.750 [2024-11-26 01:12:45.527888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:22.750 [2024-11-26 01:12:45.527896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:22.750 [2024-11-26 01:12:45.527904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:22.750 [2024-11-26 01:12:45.527917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:22.750 [2024-11-26 01:12:45.527924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.750 [2024-11-26 
01:12:45.527931] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:22.750 [2024-11-26 01:12:45.527939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:22.750 [2024-11-26 01:12:45.527946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.750 [2024-11-26 01:12:45.527954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.750 [2024-11-26 01:12:45.527962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:22.750 [2024-11-26 01:12:45.527968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:22.750 [2024-11-26 01:12:45.527975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:22.750 [2024-11-26 01:12:45.527984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:22.750 [2024-11-26 01:12:45.527994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:22.750 [2024-11-26 01:12:45.528001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:22.750 [2024-11-26 01:12:45.528010] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:22.750 [2024-11-26 01:12:45.528019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.750 [2024-11-26 01:12:45.528028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:22.750 [2024-11-26 01:12:45.528035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:22.750 [2024-11-26 01:12:45.528042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:22.750 [2024-11-26 01:12:45.528050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:22.750 [2024-11-26 01:12:45.528057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:22.750 [2024-11-26 01:12:45.528065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:22.750 [2024-11-26 01:12:45.528071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:22.750 [2024-11-26 01:12:45.528078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:22.750 [2024-11-26 01:12:45.528085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:22.750 [2024-11-26 01:12:45.528092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:22.750 [2024-11-26 01:12:45.528100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:22.750 [2024-11-26 01:12:45.528107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:22.750 [2024-11-26 01:12:45.528116] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:22.750 [2024-11-26 01:12:45.528123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:22.750 [2024-11-26 01:12:45.528131] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:22.750 [2024-11-26 01:12:45.528139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.750 [2024-11-26 01:12:45.528149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:22.750 [2024-11-26 01:12:45.528156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:22.750 [2024-11-26 01:12:45.528163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:22.750 [2024-11-26 01:12:45.528169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:22.750 [2024-11-26 01:12:45.528177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.528189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:22.750 [2024-11-26 01:12:45.528196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:27:22.750 [2024-11-26 01:12:45.528206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.542418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.542586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.750 [2024-11-26 01:12:45.542646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.169 ms 00:27:22.750 [2024-11-26 01:12:45.542670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.542769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.542791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:22.750 [2024-11-26 01:12:45.542822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:22.750 [2024-11-26 01:12:45.542854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.564941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.565191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.750 [2024-11-26 01:12:45.565308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.000 ms 00:27:22.750 [2024-11-26 01:12:45.565355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.565446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.565491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.750 [2024-11-26 01:12:45.565528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.750 [2024-11-26 01:12:45.565640] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.566358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.566459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.750 [2024-11-26 01:12:45.566580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:27:22.750 [2024-11-26 01:12:45.566624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.566929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.566980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.750 [2024-11-26 01:12:45.567113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:27:22.750 [2024-11-26 01:12:45.567162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.576350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.576501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.750 [2024-11-26 01:12:45.576623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.932 ms 00:27:22.750 [2024-11-26 01:12:45.576659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.580497] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:22.750 [2024-11-26 01:12:45.580667] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:22.750 [2024-11-26 01:12:45.580737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.580759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:22.750 [2024-11-26 01:12:45.580780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.960 ms 00:27:22.750 [2024-11-26 01:12:45.580798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.750 [2024-11-26 01:12:45.596828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.750 [2024-11-26 01:12:45.596986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:22.750 [2024-11-26 01:12:45.597044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.965 ms 00:27:22.751 [2024-11-26 01:12:45.597066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.600247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.600404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:22.751 [2024-11-26 01:12:45.600464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.052 ms 00:27:22.751 [2024-11-26 01:12:45.600486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.603039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.603194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:22.751 [2024-11-26 01:12:45.603247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.436 ms 00:27:22.751 [2024-11-26 01:12:45.603268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:22.751 [2024-11-26 01:12:45.603704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.603770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:22.751 [2024-11-26 01:12:45.603880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:27:22.751 [2024-11-26 01:12:45.603947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.626980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.627168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:22.751 [2024-11-26 01:12:45.627239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.993 ms 00:27:22.751 [2024-11-26 01:12:45.627262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.635460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:22.751 [2024-11-26 01:12:45.638577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.638712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:22.751 [2024-11-26 01:12:45.638766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.219 ms 00:27:22.751 [2024-11-26 01:12:45.638788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.638890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.638924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:22.751 [2024-11-26 01:12:45.638946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:22.751 [2024-11-26 01:12:45.639010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.640698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.640860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:22.751 [2024-11-26 01:12:45.640918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.625 ms 00:27:22.751 [2024-11-26 01:12:45.640940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.640985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.641008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:22.751 [2024-11-26 01:12:45.641027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:22.751 [2024-11-26 01:12:45.641046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.641093] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:22.751 [2024-11-26 01:12:45.641117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.641185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:22.751 [2024-11-26 01:12:45.641213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:22.751 [2024-11-26 01:12:45.641232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.646926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 
[2024-11-26 01:12:45.647085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:22.751 [2024-11-26 01:12:45.647140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.658 ms 00:27:22.751 [2024-11-26 01:12:45.647163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.647310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.751 [2024-11-26 01:12:45.647343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:22.751 [2024-11-26 01:12:45.647365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:22.751 [2024-11-26 01:12:45.647432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.751 [2024-11-26 01:12:45.648643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.892 ms, result 0 00:27:24.136  [2024-11-26T01:12:47.994Z] Copying: 1020/1048576 [kB] (1020 kBps) [2024-11-26T01:12:48.936Z] Copying: 4080/1048576 [kB] (3060 kBps) [2024-11-26T01:12:49.874Z] Copying: 17/1024 [MB] (14 MBps) [2024-11-26T01:12:51.259Z] Copying: 54/1024 [MB] (36 MBps) [2024-11-26T01:12:51.831Z] Copying: 83/1024 [MB] (28 MBps) [2024-11-26T01:12:53.224Z] Copying: 110/1024 [MB] (26 MBps) [2024-11-26T01:12:54.175Z] Copying: 134/1024 [MB] (24 MBps) [2024-11-26T01:12:55.120Z] Copying: 154/1024 [MB] (19 MBps) [2024-11-26T01:12:56.066Z] Copying: 170/1024 [MB] (16 MBps) [2024-11-26T01:12:57.011Z] Copying: 188/1024 [MB] (17 MBps) [2024-11-26T01:12:57.958Z] Copying: 217/1024 [MB] (28 MBps) [2024-11-26T01:12:58.904Z] Copying: 232/1024 [MB] (15 MBps) [2024-11-26T01:12:59.848Z] Copying: 253/1024 [MB] (20 MBps) [2024-11-26T01:13:01.234Z] Copying: 276/1024 [MB] (23 MBps) [2024-11-26T01:13:02.177Z] Copying: 306/1024 [MB] (29 MBps) [2024-11-26T01:13:03.120Z] Copying: 334/1024 [MB] (28 MBps) [2024-11-26T01:13:04.086Z] Copying: 363/1024 [MB] (29 MBps) [2024-11-26T01:13:05.026Z] Copying: 395/1024 [MB] (32 MBps) [2024-11-26T01:13:05.976Z] Copying: 424/1024 [MB] (29 MBps) [2024-11-26T01:13:06.981Z] Copying: 456/1024 [MB] (31 MBps) [2024-11-26T01:13:07.923Z] Copying: 482/1024 [MB] (26 MBps) [2024-11-26T01:13:08.869Z] Copying: 514/1024 [MB] (31 MBps) [2024-11-26T01:13:10.252Z] Copying: 544/1024 [MB] (30 MBps) [2024-11-26T01:13:11.198Z] Copying: 568/1024 [MB] (24 MBps) [2024-11-26T01:13:12.143Z] Copying: 597/1024 [MB] (28 MBps) [2024-11-26T01:13:13.088Z] Copying: 625/1024 [MB] (28 MBps) [2024-11-26T01:13:14.032Z] Copying: 656/1024 [MB] (31 MBps) [2024-11-26T01:13:14.976Z] Copying: 686/1024 [MB] (30 MBps) [2024-11-26T01:13:15.913Z] Copying: 722/1024 [MB] (35 MBps) [2024-11-26T01:13:16.856Z] Copying: 767/1024 [MB] (44 MBps) [2024-11-26T01:13:18.241Z] Copying: 794/1024 [MB] (27 MBps) [2024-11-26T01:13:19.188Z] Copying: 832/1024 [MB] (38 MBps) [2024-11-26T01:13:20.132Z] Copying: 862/1024 [MB] (29 MBps) [2024-11-26T01:13:21.067Z] Copying: 891/1024 [MB] (29 MBps) [2024-11-26T01:13:22.009Z] Copying: 920/1024 [MB] (28 MBps) [2024-11-26T01:13:22.955Z] Copying: 944/1024 [MB] (23 MBps) [2024-11-26T01:13:23.889Z] Copying: 971/1024 [MB] (26 MBps) [2024-11-26T01:13:24.149Z] Copying: 1017/1024 [MB] (46 MBps) [2024-11-26T01:13:24.149Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 01:13:24.041452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.041544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
00:28:01.232 [2024-11-26 01:13:24.041566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:01.232 [2024-11-26 01:13:24.041589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.041626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:01.232 [2024-11-26 01:13:24.042318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.042348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:01.232 [2024-11-26 01:13:24.042364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:28:01.232 [2024-11-26 01:13:24.042380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.042765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.042791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:01.232 [2024-11-26 01:13:24.042805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:28:01.232 [2024-11-26 01:13:24.042819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.055951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.055983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:01.232 [2024-11-26 01:13:24.055999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.089 ms 00:28:01.232 [2024-11-26 01:13:24.056007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.062392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.062425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:01.232 [2024-11-26 01:13:24.062435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.355 ms 00:28:01.232 [2024-11-26 01:13:24.062443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.063702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.063854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:01.232 [2024-11-26 01:13:24.063870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:28:01.232 [2024-11-26 01:13:24.063877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.067363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.067397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:01.232 [2024-11-26 01:13:24.067405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:28:01.232 [2024-11-26 01:13:24.067411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.068909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.068929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:01.232 [2024-11-26 01:13:24.068937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:28:01.232 [2024-11-26 01:13:24.068943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 
01:13:24.070507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.070527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:01.232 [2024-11-26 01:13:24.070544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:28:01.232 [2024-11-26 01:13:24.070550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.072081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.072183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:01.232 [2024-11-26 01:13:24.072233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:28:01.232 [2024-11-26 01:13:24.072252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.073262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.073371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:01.232 [2024-11-26 01:13:24.073419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.978 ms 00:28:01.232 [2024-11-26 01:13:24.073436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.074763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.232 [2024-11-26 01:13:24.074904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:01.232 [2024-11-26 01:13:24.074957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:28:01.232 [2024-11-26 01:13:24.074975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.232 [2024-11-26 01:13:24.075022] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:01.232 [2024-11-26 01:13:24.075108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:01.232 [2024-11-26 01:13:24.075146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:01.232 [2024-11-26 01:13:24.075168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075472] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:01.232 [2024-11-26 01:13:24.075664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.075989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076332] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.076956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 
01:13:24.077116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:01.233 [2024-11-26 01:13:24.077837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.077975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:28:01.234 [2024-11-26 01:13:24.078041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:01.234 [2024-11-26 01:13:24.078478] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:01.234 [2024-11-26 01:13:24.078498] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9de5639c-e423-4dbf-b262-b7551a48204b 00:28:01.234 [2024-11-26 01:13:24.078539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:01.234 [2024-11-26 01:13:24.078554] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 161984 00:28:01.234 [2024-11-26 01:13:24.078591] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160000 00:28:01.234 [2024-11-26 01:13:24.078606] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:28:01.234 [2024-11-26 01:13:24.078620] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:01.234 [2024-11-26 01:13:24.078634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:01.234 [2024-11-26 01:13:24.078650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:01.234 [2024-11-26 01:13:24.078664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:01.234 [2024-11-26 01:13:24.078701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:01.234 [2024-11-26 01:13:24.078718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.234 [2024-11-26 01:13:24.078830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:01.234 [2024-11-26 01:13:24.078907] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:28:01.234 [2024-11-26 01:13:24.078928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.080353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.234 [2024-11-26 01:13:24.080442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:01.234 [2024-11-26 01:13:24.080481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:28:01.234 [2024-11-26 01:13:24.080498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.080602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.234 [2024-11-26 01:13:24.080627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:01.234 [2024-11-26 01:13:24.080675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:01.234 [2024-11-26 01:13:24.080692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.085348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.085369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.234 [2024-11-26 01:13:24.085377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.085383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.085422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.085434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.234 [2024-11-26 01:13:24.085440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.085450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.085479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.085486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.234 [2024-11-26 01:13:24.085492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.085497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.085509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.085514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.234 [2024-11-26 01:13:24.085523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.085528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.093783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.093812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.234 [2024-11-26 01:13:24.093821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.093827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.100442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.100605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:28:01.234 [2024-11-26 01:13:24.100622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.100628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.100663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.100670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.234 [2024-11-26 01:13:24.100677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.100682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.100701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.100707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.234 [2024-11-26 01:13:24.100713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.100724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.100779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.100786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.234 [2024-11-26 01:13:24.100793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.100802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.100829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.100987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:01.234 [2024-11-26 01:13:24.101012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.101026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.234 [2024-11-26 01:13:24.101093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.234 [2024-11-26 01:13:24.101111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.234 [2024-11-26 01:13:24.101126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.234 [2024-11-26 01:13:24.101139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.235 [2024-11-26 01:13:24.101184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.235 [2024-11-26 01:13:24.101230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.235 [2024-11-26 01:13:24.101245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.235 [2024-11-26 01:13:24.101253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.235 [2024-11-26 01:13:24.101353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.901 ms, result 0 00:28:01.493 00:28:01.493 00:28:01.493 01:13:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:03.395 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:03.395 01:13:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:03.395 [2024-11-26 01:13:25.996545] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:28:03.395 [2024-11-26 01:13:25.996812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95036 ] 00:28:03.395 [2024-11-26 01:13:26.129295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:03.395 [2024-11-26 01:13:26.155668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.395 [2024-11-26 01:13:26.172804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.395 [2024-11-26 01:13:26.254510] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.395 [2024-11-26 01:13:26.254716] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.655 [2024-11-26 01:13:26.396754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.396899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:03.655 [2024-11-26 01:13:26.396959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:03.655 [2024-11-26 01:13:26.396978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.397029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.397048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.655 [2024-11-26 01:13:26.397063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:28:03.655 [2024-11-26 01:13:26.397080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.397103] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:03.655 [2024-11-26 01:13:26.397329] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:03.655 [2024-11-26 01:13:26.397401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.397443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.655 [2024-11-26 01:13:26.397461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:28:03.655 [2024-11-26 01:13:26.397475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.398428] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:03.655 [2024-11-26 01:13:26.400455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.400546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:03.655 [2024-11-26 01:13:26.400594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 00:28:03.655 [2024-11-26 01:13:26.400614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.400660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:03.655 [2024-11-26 01:13:26.400707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:03.655 [2024-11-26 01:13:26.400725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:03.655 [2024-11-26 01:13:26.400739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.405169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.405255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.655 [2024-11-26 01:13:26.405291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.357 ms 00:28:03.655 [2024-11-26 01:13:26.405307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.405379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.405480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.655 [2024-11-26 01:13:26.405502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:03.655 [2024-11-26 01:13:26.405516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.405566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.405693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:03.655 [2024-11-26 01:13:26.405716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.655 [2024-11-26 01:13:26.405730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.405756] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:03.655 [2024-11-26 01:13:26.407010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.407092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.655 [2024-11-26 01:13:26.407131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:28:03.655 [2024-11-26 01:13:26.407148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.407187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.655 [2024-11-26 01:13:26.407280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:03.655 [2024-11-26 01:13:26.407303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:03.655 [2024-11-26 01:13:26.407311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.655 [2024-11-26 01:13:26.407326] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:03.655 [2024-11-26 01:13:26.407340] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:03.655 [2024-11-26 01:13:26.407371] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:03.655 [2024-11-26 01:13:26.407385] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:03.655 [2024-11-26 01:13:26.407462] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:03.655 [2024-11-26 01:13:26.407472] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:03.655 [2024-11-26 01:13:26.407481] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:03.655 [2024-11-26 01:13:26.407488] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:03.656 [2024-11-26 01:13:26.407495] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:03.656 [2024-11-26 01:13:26.407500] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:03.656 [2024-11-26 01:13:26.407506] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:03.656 [2024-11-26 01:13:26.407511] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:03.656 [2024-11-26 01:13:26.407516] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:03.656 [2024-11-26 01:13:26.407522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.656 [2024-11-26 01:13:26.407530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:03.656 [2024-11-26 01:13:26.407539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:28:03.656 [2024-11-26 01:13:26.407545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.656 [2024-11-26 01:13:26.407612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.656 [2024-11-26 01:13:26.407618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:03.656 [2024-11-26 01:13:26.407624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:28:03.656 [2024-11-26 01:13:26.407629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.656 [2024-11-26 01:13:26.407701] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:03.656 [2024-11-26 01:13:26.407710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:03.656 [2024-11-26 01:13:26.407718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.656 [2024-11-26 01:13:26.407726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:03.656 [2024-11-26 01:13:26.407740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407750] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:03.656 [2024-11-26 01:13:26.407755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:03.656 [2024-11-26 01:13:26.407760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.656 [2024-11-26 01:13:26.407772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:03.656 [2024-11-26 01:13:26.407777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:03.656 [2024-11-26 01:13:26.407782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.656 [2024-11-26 01:13:26.407788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:03.656 [2024-11-26 01:13:26.407795] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:28:03.656 [2024-11-26 01:13:26.407800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:03.656 [2024-11-26 01:13:26.407810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:03.656 [2024-11-26 01:13:26.407815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:03.656 [2024-11-26 01:13:26.407825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.656 [2024-11-26 01:13:26.407835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:03.656 [2024-11-26 01:13:26.407928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.656 [2024-11-26 01:13:26.407965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:03.656 [2024-11-26 01:13:26.407979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:03.656 [2024-11-26 01:13:26.407992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.656 [2024-11-26 01:13:26.408006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:03.656 [2024-11-26 01:13:26.408046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:03.656 [2024-11-26 01:13:26.408062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.656 [2024-11-26 01:13:26.408076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:03.656 [2024-11-26 01:13:26.408090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:03.656 [2024-11-26 01:13:26.408103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.656 [2024-11-26 01:13:26.408117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:03.656 [2024-11-26 01:13:26.408154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:03.656 [2024-11-26 01:13:26.408170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.656 [2024-11-26 01:13:26.408184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:03.656 [2024-11-26 01:13:26.408198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:03.656 [2024-11-26 01:13:26.408211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.656 [2024-11-26 01:13:26.408225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:03.656 [2024-11-26 01:13:26.408240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:03.656 [2024-11-26 01:13:26.408279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.656 [2024-11-26 01:13:26.408293] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:03.656 [2024-11-26 01:13:26.408328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:03.656 [2024-11-26 01:13:26.408343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.656 [2024-11-26 
01:13:26.408361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.656 [2024-11-26 01:13:26.408378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:03.656 [2024-11-26 01:13:26.408392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:03.656 [2024-11-26 01:13:26.408430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:03.656 [2024-11-26 01:13:26.408446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:03.656 [2024-11-26 01:13:26.408460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:03.656 [2024-11-26 01:13:26.408474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:03.656 [2024-11-26 01:13:26.408489] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:03.656 [2024-11-26 01:13:26.408513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.656 [2024-11-26 01:13:26.408554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:03.656 [2024-11-26 01:13:26.408599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:03.656 [2024-11-26 01:13:26.408623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:03.656 [2024-11-26 01:13:26.408645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:03.656 [2024-11-26 01:13:26.408688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:03.656 [2024-11-26 01:13:26.408705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:03.656 [2024-11-26 01:13:26.408711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:03.656 [2024-11-26 01:13:26.408717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:03.656 [2024-11-26 01:13:26.408723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:03.656 [2024-11-26 01:13:26.408729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:03.656 [2024-11-26 01:13:26.408734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:03.656 [2024-11-26 01:13:26.408740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:03.656 [2024-11-26 01:13:26.408745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:03.656 [2024-11-26 01:13:26.408751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:03.657 [2024-11-26 
01:13:26.408756] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:03.657 [2024-11-26 01:13:26.408762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.657 [2024-11-26 01:13:26.408769] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.657 [2024-11-26 01:13:26.408774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:03.657 [2024-11-26 01:13:26.408782] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:03.657 [2024-11-26 01:13:26.408787] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:03.657 [2024-11-26 01:13:26.408794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.408800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:03.657 [2024-11-26 01:13:26.408808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:28:03.657 [2024-11-26 01:13:26.408816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.416762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.416861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.657 [2024-11-26 01:13:26.416904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.890 ms 00:28:03.657 [2024-11-26 01:13:26.416922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.416993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.417009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:03.657 [2024-11-26 01:13:26.417045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:28:03.657 [2024-11-26 01:13:26.417066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.433659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.433856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:03.657 [2024-11-26 01:13:26.433991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.542 ms 00:28:03.657 [2024-11-26 01:13:26.434027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.434160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.434203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:03.657 [2024-11-26 01:13:26.434246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.657 [2024-11-26 01:13:26.434290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.434871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.435004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:03.657 [2024-11-26 01:13:26.435084] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:28:03.657 [2024-11-26 01:13:26.435120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.435663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.435811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:03.657 [2024-11-26 01:13:26.435914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:28:03.657 [2024-11-26 01:13:26.435965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.442295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.442447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:03.657 [2024-11-26 01:13:26.442528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.235 ms 00:28:03.657 [2024-11-26 01:13:26.442622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.444748] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:03.657 [2024-11-26 01:13:26.444867] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:03.657 [2024-11-26 01:13:26.444921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.444938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:03.657 [2024-11-26 01:13:26.444952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:28:03.657 [2024-11-26 01:13:26.444996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.456110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.456203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:03.657 [2024-11-26 01:13:26.456247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.063 ms 00:28:03.657 [2024-11-26 01:13:26.456263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.457780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.457879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:03.657 [2024-11-26 01:13:26.457921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:28:03.657 [2024-11-26 01:13:26.457937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.459186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.459270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:03.657 [2024-11-26 01:13:26.459307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:28:03.657 [2024-11-26 01:13:26.459328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.459590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.459677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:03.657 [2024-11-26 01:13:26.459725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 
00:28:03.657 [2024-11-26 01:13:26.459743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.473450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.473555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:03.657 [2024-11-26 01:13:26.473599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.683 ms 00:28:03.657 [2024-11-26 01:13:26.473618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.479347] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:03.657 [2024-11-26 01:13:26.481309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.481393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:03.657 [2024-11-26 01:13:26.481444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.657 ms 00:28:03.657 [2024-11-26 01:13:26.481461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.481509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.481616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:03.657 [2024-11-26 01:13:26.481635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:03.657 [2024-11-26 01:13:26.481649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.482195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.482280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:03.657 [2024-11-26 01:13:26.482320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:28:03.657 [2024-11-26 01:13:26.482337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.482368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.482411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:03.657 [2024-11-26 01:13:26.482429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:03.657 [2024-11-26 01:13:26.482443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.482499] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:03.657 [2024-11-26 01:13:26.482522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.482630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:03.657 [2024-11-26 01:13:26.482648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:03.657 [2024-11-26 01:13:26.482665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 01:13:26.485677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.485765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:03.657 [2024-11-26 01:13:26.485808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.985 ms 00:28:03.657 [2024-11-26 01:13:26.485825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.657 [2024-11-26 
01:13:26.485894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.657 [2024-11-26 01:13:26.485970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:03.657 [2024-11-26 01:13:26.485992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:03.657 [2024-11-26 01:13:26.486007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.658 [2024-11-26 01:13:26.486778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 89.719 ms, result 0 00:28:05.042  [2024-11-26T01:13:28.900Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-26T01:13:29.842Z] Copying: 45/1024 [MB] (23 MBps) [2024-11-26T01:13:30.786Z] Copying: 64/1024 [MB] (19 MBps) [2024-11-26T01:13:31.729Z] Copying: 87/1024 [MB] (22 MBps) [2024-11-26T01:13:32.671Z] Copying: 109/1024 [MB] (22 MBps) [2024-11-26T01:13:34.056Z] Copying: 131/1024 [MB] (21 MBps) [2024-11-26T01:13:34.627Z] Copying: 150/1024 [MB] (18 MBps) [2024-11-26T01:13:36.012Z] Copying: 169/1024 [MB] (19 MBps) [2024-11-26T01:13:36.953Z] Copying: 179/1024 [MB] (10 MBps) [2024-11-26T01:13:37.900Z] Copying: 191/1024 [MB] (11 MBps) [2024-11-26T01:13:38.917Z] Copying: 202/1024 [MB] (11 MBps) [2024-11-26T01:13:39.860Z] Copying: 214/1024 [MB] (11 MBps) [2024-11-26T01:13:40.804Z] Copying: 224/1024 [MB] (10 MBps) [2024-11-26T01:13:41.748Z] Copying: 236/1024 [MB] (11 MBps) [2024-11-26T01:13:42.693Z] Copying: 248/1024 [MB] (11 MBps) [2024-11-26T01:13:43.639Z] Copying: 260/1024 [MB] (12 MBps) [2024-11-26T01:13:45.029Z] Copying: 271/1024 [MB] (11 MBps) [2024-11-26T01:13:45.973Z] Copying: 282/1024 [MB] (11 MBps) [2024-11-26T01:13:46.916Z] Copying: 294/1024 [MB] (11 MBps) [2024-11-26T01:13:47.861Z] Copying: 305/1024 [MB] (11 MBps) [2024-11-26T01:13:48.807Z] Copying: 316/1024 [MB] (11 MBps) [2024-11-26T01:13:49.751Z] Copying: 327/1024 [MB] (10 MBps) [2024-11-26T01:13:50.696Z] Copying: 337/1024 [MB] (10 MBps) [2024-11-26T01:13:51.640Z] Copying: 347/1024 [MB] (10 MBps) [2024-11-26T01:13:53.029Z] Copying: 359/1024 [MB] (11 MBps) [2024-11-26T01:13:53.976Z] Copying: 369/1024 [MB] (10 MBps) [2024-11-26T01:13:54.922Z] Copying: 380/1024 [MB] (10 MBps) [2024-11-26T01:13:55.866Z] Copying: 390/1024 [MB] (10 MBps) [2024-11-26T01:13:56.810Z] Copying: 401/1024 [MB] (10 MBps) [2024-11-26T01:13:57.756Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-26T01:13:58.700Z] Copying: 425/1024 [MB] (13 MBps) [2024-11-26T01:13:59.644Z] Copying: 446/1024 [MB] (21 MBps) [2024-11-26T01:14:01.032Z] Copying: 457/1024 [MB] (10 MBps) [2024-11-26T01:14:01.977Z] Copying: 475/1024 [MB] (17 MBps) [2024-11-26T01:14:02.921Z] Copying: 488/1024 [MB] (13 MBps) [2024-11-26T01:14:03.866Z] Copying: 499/1024 [MB] (10 MBps) [2024-11-26T01:14:04.980Z] Copying: 511/1024 [MB] (12 MBps) [2024-11-26T01:14:05.924Z] Copying: 524/1024 [MB] (12 MBps) [2024-11-26T01:14:06.866Z] Copying: 547/1024 [MB] (23 MBps) [2024-11-26T01:14:07.809Z] Copying: 563/1024 [MB] (16 MBps) [2024-11-26T01:14:08.751Z] Copying: 579/1024 [MB] (15 MBps) [2024-11-26T01:14:09.704Z] Copying: 600/1024 [MB] (20 MBps) [2024-11-26T01:14:10.647Z] Copying: 615/1024 [MB] (15 MBps) [2024-11-26T01:14:12.037Z] Copying: 630/1024 [MB] (14 MBps) [2024-11-26T01:14:12.980Z] Copying: 644/1024 [MB] (13 MBps) [2024-11-26T01:14:13.924Z] Copying: 660/1024 [MB] (15 MBps) [2024-11-26T01:14:14.870Z] Copying: 672/1024 [MB] (12 MBps) [2024-11-26T01:14:15.815Z] Copying: 682/1024 [MB] (10 MBps) [2024-11-26T01:14:16.760Z] Copying: 704/1024 [MB] (22 
MBps) [2024-11-26T01:14:17.704Z] Copying: 723/1024 [MB] (19 MBps) [2024-11-26T01:14:18.649Z] Copying: 745/1024 [MB] (21 MBps) [2024-11-26T01:14:20.036Z] Copying: 760/1024 [MB] (15 MBps) [2024-11-26T01:14:20.983Z] Copying: 774/1024 [MB] (13 MBps) [2024-11-26T01:14:21.928Z] Copying: 789/1024 [MB] (14 MBps) [2024-11-26T01:14:22.874Z] Copying: 809/1024 [MB] (20 MBps) [2024-11-26T01:14:23.818Z] Copying: 830/1024 [MB] (20 MBps) [2024-11-26T01:14:24.763Z] Copying: 849/1024 [MB] (18 MBps) [2024-11-26T01:14:25.708Z] Copying: 870/1024 [MB] (21 MBps) [2024-11-26T01:14:26.649Z] Copying: 890/1024 [MB] (19 MBps) [2024-11-26T01:14:28.031Z] Copying: 916/1024 [MB] (25 MBps) [2024-11-26T01:14:28.974Z] Copying: 930/1024 [MB] (14 MBps) [2024-11-26T01:14:29.913Z] Copying: 945/1024 [MB] (14 MBps) [2024-11-26T01:14:30.858Z] Copying: 971/1024 [MB] (25 MBps) [2024-11-26T01:14:31.804Z] Copying: 988/1024 [MB] (17 MBps) [2024-11-26T01:14:32.750Z] Copying: 999/1024 [MB] (10 MBps) [2024-11-26T01:14:33.696Z] Copying: 1009/1024 [MB] (10 MBps) [2024-11-26T01:14:34.270Z] Copying: 1020/1024 [MB] (10 MBps) [2024-11-26T01:14:34.270Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-26 01:14:34.044376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.044483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:11.353 [2024-11-26 01:14:34.044505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:11.353 [2024-11-26 01:14:34.044518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.044553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:11.353 [2024-11-26 01:14:34.045393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.045429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:11.353 [2024-11-26 01:14:34.045446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:29:11.353 [2024-11-26 01:14:34.045459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.046709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.046741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:11.353 [2024-11-26 01:14:34.046766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.218 ms 00:29:11.353 [2024-11-26 01:14:34.046775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.050895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.050917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:11.353 [2024-11-26 01:14:34.050930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.102 ms 00:29:11.353 [2024-11-26 01:14:34.050941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.058293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.058336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:11.353 [2024-11-26 01:14:34.058376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.333 ms 00:29:11.353 [2024-11-26 01:14:34.058385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 
[2024-11-26 01:14:34.061352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.061410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:11.353 [2024-11-26 01:14:34.061421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.901 ms 00:29:11.353 [2024-11-26 01:14:34.061429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.066348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.066402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:11.353 [2024-11-26 01:14:34.066413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.873 ms 00:29:11.353 [2024-11-26 01:14:34.066422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.070658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.070717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:11.353 [2024-11-26 01:14:34.070735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:29:11.353 [2024-11-26 01:14:34.070744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.074120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.074183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:11.353 [2024-11-26 01:14:34.074194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.358 ms 00:29:11.353 [2024-11-26 01:14:34.074202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.077124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.077170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:11.353 [2024-11-26 01:14:34.077181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.878 ms 00:29:11.353 [2024-11-26 01:14:34.077188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.353 [2024-11-26 01:14:34.079414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.353 [2024-11-26 01:14:34.079463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:11.354 [2024-11-26 01:14:34.079473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:29:11.354 [2024-11-26 01:14:34.079481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.354 [2024-11-26 01:14:34.081892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.354 [2024-11-26 01:14:34.081938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:11.354 [2024-11-26 01:14:34.081950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.336 ms 00:29:11.354 [2024-11-26 01:14:34.081957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.354 [2024-11-26 01:14:34.081997] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:11.354 [2024-11-26 01:14:34.082014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:11.354 [2024-11-26 01:14:34.082025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 
wr_cnt: 1 state: open 00:29:11.354 [2024-11-26 01:14:34.082034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082457] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082658] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:11.354 [2024-11-26 01:14:34.082705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:11.355 [2024-11-26 01:14:34.082878] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:11.355 [2024-11-26 01:14:34.082888] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9de5639c-e423-4dbf-b262-b7551a48204b 00:29:11.355 [2024-11-26 01:14:34.082897] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:11.355 [2024-11-26 01:14:34.082905] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:11.355 [2024-11-26 01:14:34.082913] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:11.355 [2024-11-26 01:14:34.082930] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:11.355 [2024-11-26 01:14:34.082937] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:11.355 [2024-11-26 01:14:34.082954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:11.355 [2024-11-26 01:14:34.082961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:11.355 [2024-11-26 01:14:34.082968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:11.355 [2024-11-26 01:14:34.082975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:11.355 [2024-11-26 01:14:34.082983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.355 [2024-11-26 01:14:34.083004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:11.355 [2024-11-26 01:14:34.083014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:29:11.355 [2024-11-26 01:14:34.083022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.085489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.355 [2024-11-26 01:14:34.085523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:11.355 [2024-11-26 01:14:34.085534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:29:11.355 [2024-11-26 01:14:34.085549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.085675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:11.355 [2024-11-26 01:14:34.085685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:11.355 [2024-11-26 01:14:34.085694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:29:11.355 [2024-11-26 01:14:34.085702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.093347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.093400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:11.355 [2024-11-26 01:14:34.093415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.093424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.093484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.093492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:11.355 [2024-11-26 01:14:34.093504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.093512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.093576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.093588] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:11.355 [2024-11-26 01:14:34.093600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.093608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.093624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.093633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:11.355 [2024-11-26 01:14:34.093641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.093652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.107470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.107695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:11.355 [2024-11-26 01:14:34.107724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.107733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.118387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.118590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:11.355 [2024-11-26 01:14:34.118609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.118618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.118668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.118678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:11.355 [2024-11-26 01:14:34.118687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.118695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.118743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.118752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:11.355 [2024-11-26 01:14:34.118761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.118769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.118876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.118892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:11.355 [2024-11-26 01:14:34.118901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.118909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.118945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:11.355 [2024-11-26 01:14:34.118954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:11.355 [2024-11-26 01:14:34.118962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:11.355 [2024-11-26 01:14:34.118970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:11.355 [2024-11-26 01:14:34.119011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:11.355 [2024-11-26 01:14:34.119020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:11.355 [2024-11-26 01:14:34.119029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:11.355 [2024-11-26 01:14:34.119038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.355 [2024-11-26 01:14:34.119085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:11.355 [2024-11-26 01:14:34.119095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:11.355 [2024-11-26 01:14:34.119103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:11.355 [2024-11-26 01:14:34.119111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:11.355 [2024-11-26 01:14:34.119247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.847 ms, result 0
00:29:11.617
00:29:11.617
00:29:11.617 01:14:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:29:14.166 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 93383
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93383 ']'
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 93383
00:29:14.166 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (93383) - No such process
00:29:14.166 Process with pid 93383 is not found
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 93383 is not found'
00:29:14.166 01:14:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:29:14.426 Remove shared memory files
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:29:14.426 ************************************
00:29:14.426 END TEST ftl_dirty_shutdown
00:29:14.426 ************************************
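The md5sum -c pass above is the heart of the dirty-shutdown test: data written through the FTL bdev is checksummed, the target is stopped without a clean FTL shutdown, and after recovery the contents must still match. A minimal bash sketch of that verify pattern, with illustrative paths and PID variable (the real test drives I/O through SPDK tooling rather than plain files):

    # Before the unclean stop: record a checksum of the written data.
    md5sum testfile2 > testfile2.md5
    # Simulate a dirty shutdown: kill the target with no FTL cleanup step.
    kill -9 "$spdk_tgt_pid"
    # ... restart the target and let FTL recover from its superblock ...
    # After recovery: 'md5sum -c' exits non-zero if the contents changed.
    md5sum -c testfile2.md5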
00:29:14.426 real 3m45.752s
00:29:14.426 user 3m56.699s
00:29:14.426 sys 0m23.258s
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:14.426 01:14:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:14.426 01:14:37 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:29:14.426 01:14:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:14.426 01:14:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:14.426 01:14:37 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:14.426 ************************************
00:29:14.426 START TEST ftl_upgrade_shutdown
00:29:14.426 ************************************
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:29:14.426 * Looking for test storage...
00:29:14.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:14.426 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:29:14.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.426 --rc genhtml_branch_coverage=1
00:29:14.427 --rc genhtml_function_coverage=1
00:29:14.427 --rc genhtml_legend=1
00:29:14.427 --rc geninfo_all_blocks=1
00:29:14.427 --rc geninfo_unexecuted_blocks=1
00:29:14.427
00:29:14.427 '
00:29:14.427 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:29:14.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.427 --rc genhtml_branch_coverage=1
00:29:14.427 --rc genhtml_function_coverage=1
00:29:14.427 --rc genhtml_legend=1
00:29:14.427 --rc geninfo_all_blocks=1
00:29:14.427 --rc geninfo_unexecuted_blocks=1
00:29:14.427
00:29:14.427 '
00:29:14.427 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:29:14.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.427 --rc genhtml_branch_coverage=1
00:29:14.427 --rc genhtml_function_coverage=1
00:29:14.427 --rc genhtml_legend=1
00:29:14.427 --rc geninfo_all_blocks=1
00:29:14.427 --rc geninfo_unexecuted_blocks=1
00:29:14.427
00:29:14.427 '
00:29:14.427 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:29:14.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:14.427 --rc genhtml_branch_coverage=1
00:29:14.427 --rc genhtml_function_coverage=1
00:29:14.427 --rc genhtml_legend=1
00:29:14.427 --rc geninfo_all_blocks=1
00:29:14.427 --rc geninfo_unexecuted_blocks=1
00:29:14.427
00:29:14.427 '
00:29:14.427 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:29:14.427 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:29:14.427 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:29:14.689 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid=
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=95825
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]'
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 95825
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95825 ']'
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:14.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:14.690 01:14:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:14.690 [2024-11-26 01:14:37.447341] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization...
00:29:14.690 [2024-11-26 01:14:37.447736] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95825 ]
00:29:14.690 [2024-11-26 01:14:37.585716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
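The waitforlisten helper traced above gates the test until the freshly forked spdk_tgt answers RPCs on /var/tmp/spdk.sock (up to max_retries=100 attempts). A rough bash equivalent of that wait loop, as a sketch; the poll interval is an assumption, and rpc_get_methods serves only as a cheap liveness probe:

    # Launch the target pinned to core 0, then poll its RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' &
    spdk_tgt_pid=$!
    for _ in $(seq 1 100); do
        # Succeeds only once the app is up and the UNIX socket is listening.
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.5  # assumed interval; the real helper also rechecks the PID
    done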
00:29:14.952 [2024-11-26 01:14:37.616887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.952 [2024-11-26 01:14:37.646517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:15.523 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:15.784 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:16.046 { 00:29:16.046 "name": 
"basen1", 00:29:16.046 "aliases": [ 00:29:16.046 "f2a16501-0fe3-48f2-803f-559c5724781e" 00:29:16.046 ], 00:29:16.046 "product_name": "NVMe disk", 00:29:16.046 "block_size": 4096, 00:29:16.046 "num_blocks": 1310720, 00:29:16.046 "uuid": "f2a16501-0fe3-48f2-803f-559c5724781e", 00:29:16.046 "numa_id": -1, 00:29:16.046 "assigned_rate_limits": { 00:29:16.046 "rw_ios_per_sec": 0, 00:29:16.046 "rw_mbytes_per_sec": 0, 00:29:16.046 "r_mbytes_per_sec": 0, 00:29:16.046 "w_mbytes_per_sec": 0 00:29:16.046 }, 00:29:16.046 "claimed": true, 00:29:16.046 "claim_type": "read_many_write_one", 00:29:16.046 "zoned": false, 00:29:16.046 "supported_io_types": { 00:29:16.046 "read": true, 00:29:16.046 "write": true, 00:29:16.046 "unmap": true, 00:29:16.046 "flush": true, 00:29:16.046 "reset": true, 00:29:16.046 "nvme_admin": true, 00:29:16.046 "nvme_io": true, 00:29:16.046 "nvme_io_md": false, 00:29:16.046 "write_zeroes": true, 00:29:16.046 "zcopy": false, 00:29:16.046 "get_zone_info": false, 00:29:16.046 "zone_management": false, 00:29:16.046 "zone_append": false, 00:29:16.046 "compare": true, 00:29:16.046 "compare_and_write": false, 00:29:16.046 "abort": true, 00:29:16.046 "seek_hole": false, 00:29:16.046 "seek_data": false, 00:29:16.046 "copy": true, 00:29:16.046 "nvme_iov_md": false 00:29:16.046 }, 00:29:16.046 "driver_specific": { 00:29:16.046 "nvme": [ 00:29:16.046 { 00:29:16.046 "pci_address": "0000:00:11.0", 00:29:16.046 "trid": { 00:29:16.046 "trtype": "PCIe", 00:29:16.046 "traddr": "0000:00:11.0" 00:29:16.046 }, 00:29:16.046 "ctrlr_data": { 00:29:16.046 "cntlid": 0, 00:29:16.046 "vendor_id": "0x1b36", 00:29:16.046 "model_number": "QEMU NVMe Ctrl", 00:29:16.046 "serial_number": "12341", 00:29:16.046 "firmware_revision": "8.0.0", 00:29:16.046 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:16.046 "oacs": { 00:29:16.046 "security": 0, 00:29:16.046 "format": 1, 00:29:16.046 "firmware": 0, 00:29:16.046 "ns_manage": 1 00:29:16.046 }, 00:29:16.046 "multi_ctrlr": false, 00:29:16.046 "ana_reporting": false 00:29:16.046 }, 00:29:16.046 "vs": { 00:29:16.046 "nvme_version": "1.4" 00:29:16.046 }, 00:29:16.046 "ns_data": { 00:29:16.046 "id": 1, 00:29:16.046 "can_share": false 00:29:16.046 } 00:29:16.046 } 00:29:16.046 ], 00:29:16.046 "mp_policy": "active_passive" 00:29:16.046 } 00:29:16.046 } 00:29:16.046 ]' 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:16.046 01:14:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:16.307 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5d81099a-0295-4bf0-a16e-66d67035acdd 00:29:16.307 01:14:39 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:16.307 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5d81099a-0295-4bf0-a16e-66d67035acdd 00:29:16.568 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=9d9e8c99-67f4-4d9e-ab59-225e9f0f02b7 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 9d9e8c99-67f4-4d9e-ab59-225e9f0f02b7 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d82eb028-6c63-43d5-8dd0-0a4f40ae0297 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d82eb028-6c63-43d5-8dd0-0a4f40ae0297 ]] 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d82eb028-6c63-43d5-8dd0-0a4f40ae0297 5120 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d82eb028-6c63-43d5-8dd0-0a4f40ae0297 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d82eb028-6c63-43d5-8dd0-0a4f40ae0297 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d82eb028-6c63-43d5-8dd0-0a4f40ae0297 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:16.829 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d82eb028-6c63-43d5-8dd0-0a4f40ae0297 00:29:17.091 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:17.091 { 00:29:17.091 "name": "d82eb028-6c63-43d5-8dd0-0a4f40ae0297", 00:29:17.091 "aliases": [ 00:29:17.091 "lvs/basen1p0" 00:29:17.091 ], 00:29:17.091 "product_name": "Logical Volume", 00:29:17.091 "block_size": 4096, 00:29:17.091 "num_blocks": 5242880, 00:29:17.091 "uuid": "d82eb028-6c63-43d5-8dd0-0a4f40ae0297", 00:29:17.091 "assigned_rate_limits": { 00:29:17.091 "rw_ios_per_sec": 0, 00:29:17.091 "rw_mbytes_per_sec": 0, 00:29:17.091 "r_mbytes_per_sec": 0, 00:29:17.091 "w_mbytes_per_sec": 0 00:29:17.091 }, 00:29:17.092 "claimed": false, 00:29:17.092 "zoned": false, 00:29:17.092 "supported_io_types": { 00:29:17.092 "read": true, 00:29:17.092 "write": true, 00:29:17.092 "unmap": true, 00:29:17.092 "flush": false, 00:29:17.092 "reset": true, 00:29:17.092 "nvme_admin": false, 00:29:17.092 "nvme_io": false, 00:29:17.092 "nvme_io_md": false, 00:29:17.092 "write_zeroes": true, 00:29:17.092 "zcopy": false, 00:29:17.092 "get_zone_info": false, 00:29:17.092 "zone_management": false, 00:29:17.092 "zone_append": false, 00:29:17.092 "compare": false, 00:29:17.092 "compare_and_write": false, 00:29:17.092 "abort": false, 00:29:17.092 "seek_hole": true, 00:29:17.092 "seek_data": true, 
00:29:17.092 "copy": false, 00:29:17.092 "nvme_iov_md": false 00:29:17.092 }, 00:29:17.092 "driver_specific": { 00:29:17.092 "lvol": { 00:29:17.092 "lvol_store_uuid": "9d9e8c99-67f4-4d9e-ab59-225e9f0f02b7", 00:29:17.092 "base_bdev": "basen1", 00:29:17.092 "thin_provision": true, 00:29:17.092 "num_allocated_clusters": 0, 00:29:17.092 "snapshot": false, 00:29:17.092 "clone": false, 00:29:17.092 "esnap_clone": false 00:29:17.092 } 00:29:17.092 } 00:29:17.092 } 00:29:17.092 ]' 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:17.092 01:14:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:17.092 01:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:17.665 01:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:17.665 01:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:17.665 01:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:17.665 01:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:17.665 01:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:17.665 01:14:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d82eb028-6c63-43d5-8dd0-0a4f40ae0297 -c cachen1p0 --l2p_dram_limit 2 00:29:17.928 [2024-11-26 01:14:40.684910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.684977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:17.928 [2024-11-26 01:14:40.684997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:17.928 [2024-11-26 01:14:40.685007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.685080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.685094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:17.928 [2024-11-26 01:14:40.685108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:29:17.928 [2024-11-26 01:14:40.685116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.685149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:17.928 [2024-11-26 01:14:40.685473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:17.928 [2024-11-26 01:14:40.685496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.685504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:29:17.928 [2024-11-26 01:14:40.685516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.361 ms 00:29:17.928 [2024-11-26 01:14:40.685524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.685561] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a159412a-c3ea-4047-94bd-d071d6ab8847 00:29:17.928 [2024-11-26 01:14:40.687493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.687550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:17.928 [2024-11-26 01:14:40.687561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:29:17.928 [2024-11-26 01:14:40.687572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.697211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.697261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:17.928 [2024-11-26 01:14:40.697274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.582 ms 00:29:17.928 [2024-11-26 01:14:40.697291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.697398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.697411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:17.928 [2024-11-26 01:14:40.697421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:17.928 [2024-11-26 01:14:40.697432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.697495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.697508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:17.928 [2024-11-26 01:14:40.697516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:17.928 [2024-11-26 01:14:40.697527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.697558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:17.928 [2024-11-26 01:14:40.700012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.700053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:17.928 [2024-11-26 01:14:40.700067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.460 ms 00:29:17.928 [2024-11-26 01:14:40.700075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.700108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.700117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:17.928 [2024-11-26 01:14:40.700130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:17.928 [2024-11-26 01:14:40.700142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.928 [2024-11-26 01:14:40.700162] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:17.928 [2024-11-26 01:14:40.700321] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:17.928 [2024-11-26 
01:14:40.700335] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:17.928 [2024-11-26 01:14:40.700347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:17.928 [2024-11-26 01:14:40.700364] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:17.928 [2024-11-26 01:14:40.700373] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:17.928 [2024-11-26 01:14:40.700410] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:17.928 [2024-11-26 01:14:40.700419] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:17.928 [2024-11-26 01:14:40.700436] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:17.928 [2024-11-26 01:14:40.700444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:17.928 [2024-11-26 01:14:40.700455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.928 [2024-11-26 01:14:40.700462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:17.928 [2024-11-26 01:14:40.700473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:29:17.928 [2024-11-26 01:14:40.700481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.929 [2024-11-26 01:14:40.700569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.929 [2024-11-26 01:14:40.700579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:17.929 [2024-11-26 01:14:40.700588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:29:17.929 [2024-11-26 01:14:40.700598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.929 [2024-11-26 01:14:40.700696] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:17.929 [2024-11-26 01:14:40.700706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:17.929 [2024-11-26 01:14:40.700718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:17.929 [2024-11-26 01:14:40.700727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.700738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:17.929 [2024-11-26 01:14:40.700746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.700756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:17.929 [2024-11-26 01:14:40.700764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:17.929 [2024-11-26 01:14:40.700773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:17.929 [2024-11-26 01:14:40.700780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.700792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:17.929 [2024-11-26 01:14:40.700801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:17.929 [2024-11-26 01:14:40.700812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.700820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:17.929 [2024-11-26 01:14:40.700830] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:17.929 [2024-11-26 01:14:40.700862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.700873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:17.929 [2024-11-26 01:14:40.700880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:17.929 [2024-11-26 01:14:40.700889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.700898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:17.929 [2024-11-26 01:14:40.700908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:17.929 [2024-11-26 01:14:40.700916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:17.929 [2024-11-26 01:14:40.700925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:17.929 [2024-11-26 01:14:40.700935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:17.929 [2024-11-26 01:14:40.700946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:17.929 [2024-11-26 01:14:40.700954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:17.929 [2024-11-26 01:14:40.700964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:17.929 [2024-11-26 01:14:40.700972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:17.929 [2024-11-26 01:14:40.700984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:17.929 [2024-11-26 01:14:40.700991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:17.929 [2024-11-26 01:14:40.701001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:17.929 [2024-11-26 01:14:40.701009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:17.929 [2024-11-26 01:14:40.701019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:17.929 [2024-11-26 01:14:40.701026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.701036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:17.929 [2024-11-26 01:14:40.701042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:17.929 [2024-11-26 01:14:40.701053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.701059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:17.929 [2024-11-26 01:14:40.701068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:17.929 [2024-11-26 01:14:40.701074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.701084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:17.929 [2024-11-26 01:14:40.701091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:17.929 [2024-11-26 01:14:40.701100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.701106] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:17.929 [2024-11-26 01:14:40.701118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:17.929 [2024-11-26 01:14:40.701125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:17.929 [2024-11-26 01:14:40.701134] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:17.929 [2024-11-26 01:14:40.701145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:17.929 [2024-11-26 01:14:40.701153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:17.929 [2024-11-26 01:14:40.701160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:17.929 [2024-11-26 01:14:40.701169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:17.929 [2024-11-26 01:14:40.701175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:17.929 [2024-11-26 01:14:40.701184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:17.929 [2024-11-26 01:14:40.701195] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:17.929 [2024-11-26 01:14:40.701207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:17.929 [2024-11-26 01:14:40.701227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:17.929 [2024-11-26 01:14:40.701251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:17.929 [2024-11-26 01:14:40.701263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:17.929 [2024-11-26 01:14:40.701271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:17.929 [2024-11-26 01:14:40.701280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:17.929 [2024-11-26 01:14:40.701336] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:17.929 [2024-11-26 01:14:40.701346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:17.929 [2024-11-26 01:14:40.701363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:17.929 [2024-11-26 01:14:40.701370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:17.929 [2024-11-26 01:14:40.701380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:17.929 [2024-11-26 01:14:40.701388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:17.929 [2024-11-26 01:14:40.701400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:17.929 [2024-11-26 01:14:40.701407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.761 ms 00:29:17.929 [2024-11-26 01:14:40.701417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:17.929 [2024-11-26 01:14:40.701476] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:17.929 [2024-11-26 01:14:40.701489] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:22.140 [2024-11-26 01:14:44.623061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.623153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:22.140 [2024-11-26 01:14:44.623170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3921.569 ms 00:29:22.140 [2024-11-26 01:14:44.623182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.637186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.637247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:22.140 [2024-11-26 01:14:44.637261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.883 ms 00:29:22.140 [2024-11-26 01:14:44.637286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.637370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.637383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:22.140 [2024-11-26 01:14:44.637395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:22.140 [2024-11-26 01:14:44.637410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.649681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.649735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:22.140 [2024-11-26 01:14:44.649747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.234 ms 00:29:22.140 [2024-11-26 01:14:44.649761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.649793] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.649804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:22.140 [2024-11-26 01:14:44.649813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:22.140 [2024-11-26 01:14:44.649823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.650465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.650493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:22.140 [2024-11-26 01:14:44.650505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.548 ms 00:29:22.140 [2024-11-26 01:14:44.650519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.650573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.650585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:22.140 [2024-11-26 01:14:44.650594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:22.140 [2024-11-26 01:14:44.650608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.658552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.658603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:22.140 [2024-11-26 01:14:44.658614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.925 ms 00:29:22.140 [2024-11-26 01:14:44.658629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.668387] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:22.140 [2024-11-26 01:14:44.669586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.669774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:22.140 [2024-11-26 01:14:44.669798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.879 ms 00:29:22.140 [2024-11-26 01:14:44.669807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.697938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.698006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:22.140 [2024-11-26 01:14:44.698029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.089 ms 00:29:22.140 [2024-11-26 01:14:44.698040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.698195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.698210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:22.140 [2024-11-26 01:14:44.698224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:29:22.140 [2024-11-26 01:14:44.698240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.703619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.703819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:22.140 [2024-11-26 01:14:44.703868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 5.327 ms 00:29:22.140 [2024-11-26 01:14:44.703877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.709435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.709485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:22.140 [2024-11-26 01:14:44.709498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.422 ms 00:29:22.140 [2024-11-26 01:14:44.709506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.140 [2024-11-26 01:14:44.709876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.140 [2024-11-26 01:14:44.709888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:22.140 [2024-11-26 01:14:44.709903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.321 ms 00:29:22.140 [2024-11-26 01:14:44.709912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.141 [2024-11-26 01:14:44.753298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.141 [2024-11-26 01:14:44.753501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:22.141 [2024-11-26 01:14:44.753532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.349 ms 00:29:22.141 [2024-11-26 01:14:44.753546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.141 [2024-11-26 01:14:44.760631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.141 [2024-11-26 01:14:44.760800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:22.141 [2024-11-26 01:14:44.760825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.020 ms 00:29:22.141 [2024-11-26 01:14:44.760833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.141 [2024-11-26 01:14:44.766818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.141 [2024-11-26 01:14:44.766888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:22.141 [2024-11-26 01:14:44.766902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.852 ms 00:29:22.141 [2024-11-26 01:14:44.766909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.141 [2024-11-26 01:14:44.773138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.141 [2024-11-26 01:14:44.773193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:22.141 [2024-11-26 01:14:44.773210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.173 ms 00:29:22.141 [2024-11-26 01:14:44.773218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.141 [2024-11-26 01:14:44.773275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.141 [2024-11-26 01:14:44.773286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:22.141 [2024-11-26 01:14:44.773297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:22.141 [2024-11-26 01:14:44.773305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.141 [2024-11-26 01:14:44.773404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:22.141 [2024-11-26 01:14:44.773415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:22.141 [2024-11-26 01:14:44.773429] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:29:22.141 [2024-11-26 01:14:44.773437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:22.141 [2024-11-26 01:14:44.774726] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4089.357 ms, result 0 00:29:22.141 { 00:29:22.141 "name": "ftl", 00:29:22.141 "uuid": "a159412a-c3ea-4047-94bd-d071d6ab8847" 00:29:22.141 } 00:29:22.141 01:14:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:22.141 [2024-11-26 01:14:44.997729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.141 01:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:22.402 01:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:22.663 [2024-11-26 01:14:45.434194] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:22.664 01:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:22.923 [2024-11-26 01:14:45.654676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:22.923 01:14:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:23.181 Fill FTL, iteration 1 00:29:23.181 01:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:23.181 01:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:23.181 01:14:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=95953 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@164 -- # export spdk_ini_pid 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 95953 /var/tmp/spdk.tgt.sock 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 95953 ']' 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:23.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:23.181 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:23.181 [2024-11-26 01:14:46.083609] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:29:23.181 [2024-11-26 01:14:46.084050] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95953 ] 00:29:23.439 [2024-11-26 01:14:46.217307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:23.439 [2024-11-26 01:14:46.246306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.439 [2024-11-26 01:14:46.264891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.005 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.005 01:14:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:24.006 01:14:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:24.279 ftln1 00:29:24.279 01:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:24.279 01:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 95953 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95953 ']' 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95953 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95953 00:29:24.710 killing process with pid 95953 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.710 
01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95953' 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95953 00:29:24.710 01:14:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95953 00:29:24.982 01:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:24.982 01:14:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:24.982 [2024-11-26 01:14:47.795769] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:29:24.982 [2024-11-26 01:14:47.795892] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95978 ] 00:29:25.240 [2024-11-26 01:14:47.925925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:25.240 [2024-11-26 01:14:47.949860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.240 [2024-11-26 01:14:47.973326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.627  [2024-11-26T01:14:50.486Z] Copying: 184/1024 [MB] (184 MBps) [2024-11-26T01:14:51.429Z] Copying: 394/1024 [MB] (210 MBps) [2024-11-26T01:14:52.367Z] Copying: 622/1024 [MB] (228 MBps) [2024-11-26T01:14:52.936Z] Copying: 863/1024 [MB] (241 MBps) [2024-11-26T01:14:53.197Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:29:30.280 00:29:30.280 Calculate MD5 checksum, iteration 1 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:30.280 01:14:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:30.280 [2024-11-26 01:14:53.081500] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
00:29:30.280 [2024-11-26 01:14:53.081617] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96042 ] 00:29:30.541 [2024-11-26 01:14:53.213835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:30.541 [2024-11-26 01:14:53.237182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.541 [2024-11-26 01:14:53.262995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.928  [2024-11-26T01:14:55.106Z] Copying: 613/1024 [MB] (613 MBps) [2024-11-26T01:14:55.366Z] Copying: 1024/1024 [MB] (average 621 MBps) 00:29:32.449 00:29:32.449 01:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:32.449 01:14:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:34.363 Fill FTL, iteration 2 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a5f8cbf562cbceaa8aaca33d478eacc2 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.363 01:14:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:34.363 [2024-11-26 01:14:57.169976] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:29:34.363 [2024-11-26 01:14:57.170103] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96087 ] 00:29:34.621 [2024-11-26 01:14:57.301769] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
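Each iteration in the trace follows the same fill-then-checksum pattern over NVMe/TCP; a condensed sketch for iteration 1, with all spdk_dd flags taken verbatim from the log (the dd and cfg shell variables are shorthands introduced here for readability, not part of the test script):

dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
# Fill: 1024 blocks of 1 MiB of random data into ftln1 at queue depth 2, starting at block 0
"$dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$cfg" \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
# Read the same 1 GiB window back out of ftln1 into a host file
"$dd" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$cfg" \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
# Record the checksum of the window for comparison after the shutdown/upgrade cycle
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '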
00:29:34.621 [2024-11-26 01:14:57.332049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.622 [2024-11-26 01:14:57.356678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.008  [2024-11-26T01:14:59.865Z] Copying: 180/1024 [MB] (180 MBps) [2024-11-26T01:15:00.808Z] Copying: 412/1024 [MB] (232 MBps) [2024-11-26T01:15:01.751Z] Copying: 654/1024 [MB] (242 MBps) [2024-11-26T01:15:02.322Z] Copying: 896/1024 [MB] (242 MBps) [2024-11-26T01:15:02.322Z] Copying: 1024/1024 [MB] (average 226 MBps) 00:29:39.405 00:29:39.405 Calculate MD5 checksum, iteration 2 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:39.405 01:15:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:39.667 [2024-11-26 01:15:02.317716] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:29:39.667 [2024-11-26 01:15:02.317858] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96144 ] 00:29:39.667 [2024-11-26 01:15:02.451397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
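For orientation, the seek/skip arithmetic visible in the trace: each pass writes 1024 blocks of 1 MiB, i.e. a 1 GiB window, and advances by 1024 blocks, so iteration 1 covers blocks 0-1023 and iteration 2 covers blocks 1024-2047 (seek and skip step 0, 1024, 2048 in the log). A sketch of the driving loop, assuming hypothetical fill_ftl and read_back helpers standing in for the two spdk_dd invocations sketched above:

iterations=2
seek=0; skip=0
for ((i = 0; i < iterations; i++)); do
    fill_ftl "$seek";  seek=$((seek + 1024))    # hypothetical wrapper around the --seek'd fill
    read_back "$skip"; skip=$((skip + 1024))    # hypothetical wrapper around the --skip'd read-back
    # One checksum per iteration, indexed like the sums[i]= entries in the trace
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
done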
00:29:39.667 [2024-11-26 01:15:02.476312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:39.667 [2024-11-26 01:15:02.510175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.052  [2024-11-26T01:15:04.539Z] Copying: 629/1024 [MB] (629 MBps) [2024-11-26T01:15:07.837Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:29:44.920 00:29:44.920 01:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:44.920 01:15:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:47.450 01:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:47.450 01:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=6654f54b266af58a0f3048cc2b2bdc4d 00:29:47.450 01:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:47.450 01:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:47.450 01:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:47.450 [2024-11-26 01:15:09.953108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.450 [2024-11-26 01:15:09.953148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:47.450 [2024-11-26 01:15:09.953163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:47.450 [2024-11-26 01:15:09.953171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.450 [2024-11-26 01:15:09.953189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.450 [2024-11-26 01:15:09.953195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:47.450 [2024-11-26 01:15:09.953202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:47.450 [2024-11-26 01:15:09.953208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.450 [2024-11-26 01:15:09.953223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.450 [2024-11-26 01:15:09.953229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:47.450 [2024-11-26 01:15:09.953237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:47.450 [2024-11-26 01:15:09.953243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.450 [2024-11-26 01:15:09.953295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.174 ms, result 0 00:29:47.450 true 00:29:47.450 01:15:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:47.450 { 00:29:47.450 "name": "ftl", 00:29:47.450 "properties": [ 00:29:47.450 { 00:29:47.450 "name": "superblock_version", 00:29:47.450 "value": 5, 00:29:47.450 "read-only": true 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "name": "base_device", 00:29:47.450 "bands": [ 00:29:47.450 { 00:29:47.450 "id": 0, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 1, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 2, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 3, 00:29:47.450 "state": "FREE", 
00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 4, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 5, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 6, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 7, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 8, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 9, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 10, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 11, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 12, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 13, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 14, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 15, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 16, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 17, 00:29:47.450 "state": "FREE", 00:29:47.450 "validity": 0.0 00:29:47.450 } 00:29:47.450 ], 00:29:47.450 "read-only": true 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "name": "cache_device", 00:29:47.450 "type": "bdev", 00:29:47.450 "chunks": [ 00:29:47.450 { 00:29:47.450 "id": 0, 00:29:47.450 "state": "INACTIVE", 00:29:47.450 "utilization": 0.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 1, 00:29:47.450 "state": "CLOSED", 00:29:47.450 "utilization": 1.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 2, 00:29:47.450 "state": "CLOSED", 00:29:47.450 "utilization": 1.0 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 3, 00:29:47.450 "state": "OPEN", 00:29:47.450 "utilization": 0.001953125 00:29:47.450 }, 00:29:47.450 { 00:29:47.450 "id": 4, 00:29:47.450 "state": "OPEN", 00:29:47.450 "utilization": 0.0 00:29:47.450 } 00:29:47.450 ], 00:29:47.451 "read-only": true 00:29:47.451 }, 00:29:47.451 { 00:29:47.451 "name": "verbose_mode", 00:29:47.451 "value": true, 00:29:47.451 "unit": "", 00:29:47.451 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:47.451 }, 00:29:47.451 { 00:29:47.451 "name": "prep_upgrade_on_shutdown", 00:29:47.451 "value": false, 00:29:47.451 "unit": "", 00:29:47.451 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:47.451 } 00:29:47.451 ] 00:29:47.451 } 00:29:47.451 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:47.709 [2024-11-26 01:15:10.393539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.709 [2024-11-26 01:15:10.393670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:47.709 [2024-11-26 01:15:10.393721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:47.709 [2024-11-26 01:15:10.393741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:47.709 [2024-11-26 01:15:10.393773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.709 [2024-11-26 01:15:10.393790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:47.709 [2024-11-26 01:15:10.393805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:47.709 [2024-11-26 01:15:10.393820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.709 [2024-11-26 01:15:10.393890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.709 [2024-11-26 01:15:10.393909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:47.709 [2024-11-26 01:15:10.393925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:47.709 [2024-11-26 01:15:10.393939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.709 [2024-11-26 01:15:10.393998] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.449 ms, result 0 00:29:47.709 true 00:29:47.709 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:47.709 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:47.709 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:47.968 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:47.968 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:47.968 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:47.968 [2024-11-26 01:15:10.817874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.968 [2024-11-26 01:15:10.817909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:47.968 [2024-11-26 01:15:10.817919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:47.968 [2024-11-26 01:15:10.817925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.968 [2024-11-26 01:15:10.817942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.968 [2024-11-26 01:15:10.817948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:47.968 [2024-11-26 01:15:10.817954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:47.968 [2024-11-26 01:15:10.817960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.968 [2024-11-26 01:15:10.817975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:47.968 [2024-11-26 01:15:10.817981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:47.968 [2024-11-26 01:15:10.817987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:47.968 [2024-11-26 01:15:10.817992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:47.968 [2024-11-26 01:15:10.818034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.150 ms, result 0 00:29:47.968 true 00:29:47.968 01:15:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:48.227 { 00:29:48.227 "name": "ftl", 00:29:48.227 "properties": [ 00:29:48.227 { 00:29:48.227 "name": "superblock_version", 00:29:48.227 "value": 5, 00:29:48.227 "read-only": true 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "name": "base_device", 00:29:48.227 "bands": [ 00:29:48.227 { 00:29:48.227 "id": 0, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 1, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 2, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 3, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 4, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 5, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 6, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 7, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 8, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 9, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 10, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 11, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 12, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 13, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 14, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 15, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 16, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 17, 00:29:48.227 "state": "FREE", 00:29:48.227 "validity": 0.0 00:29:48.227 } 00:29:48.227 ], 00:29:48.227 "read-only": true 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "name": "cache_device", 00:29:48.227 "type": "bdev", 00:29:48.227 "chunks": [ 00:29:48.227 { 00:29:48.227 "id": 0, 00:29:48.227 "state": "INACTIVE", 00:29:48.227 "utilization": 0.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 1, 00:29:48.227 "state": "CLOSED", 00:29:48.227 "utilization": 1.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 2, 00:29:48.227 "state": "CLOSED", 00:29:48.227 "utilization": 1.0 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 3, 00:29:48.227 "state": "OPEN", 00:29:48.227 "utilization": 0.001953125 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "id": 4, 00:29:48.227 "state": "OPEN", 00:29:48.227 "utilization": 0.0 00:29:48.227 } 00:29:48.227 ], 00:29:48.227 "read-only": true 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "name": "verbose_mode", 00:29:48.227 "value": true, 00:29:48.227 "unit": "", 00:29:48.227 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:48.227 }, 00:29:48.227 { 00:29:48.227 "name": "prep_upgrade_on_shutdown", 00:29:48.227 "value": true, 00:29:48.227 "unit": "", 00:29:48.227 
"desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:48.227 } 00:29:48.227 ] 00:29:48.227 } 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 95825 ]] 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 95825 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 95825 ']' 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 95825 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95825 00:29:48.227 killing process with pid 95825 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95825' 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 95825 00:29:48.227 01:15:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 95825 00:29:48.227 [2024-11-26 01:15:11.137703] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:48.486 [2024-11-26 01:15:11.141238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.486 [2024-11-26 01:15:11.141340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:48.486 [2024-11-26 01:15:11.141354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:48.486 [2024-11-26 01:15:11.141361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.486 [2024-11-26 01:15:11.141382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:48.486 [2024-11-26 01:15:11.141746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.486 [2024-11-26 01:15:11.141762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:48.486 [2024-11-26 01:15:11.141769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.354 ms 00:29:48.486 [2024-11-26 01:15:11.141775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.559978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.560120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:56.622 [2024-11-26 01:15:18.560172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7418.160 ms 00:29:56.622 [2024-11-26 01:15:18.560193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.561147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.561222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:56.622 [2024-11-26 01:15:18.561265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.929 ms 00:29:56.622 [2024-11-26 01:15:18.561283] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.562186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.562265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:56.622 [2024-11-26 01:15:18.562308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.873 ms 00:29:56.622 [2024-11-26 01:15:18.562328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.563775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.563884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:56.622 [2024-11-26 01:15:18.563935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.400 ms 00:29:56.622 [2024-11-26 01:15:18.563943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.565691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.565717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:56.622 [2024-11-26 01:15:18.565725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.724 ms 00:29:56.622 [2024-11-26 01:15:18.565738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.565793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.565804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:56.622 [2024-11-26 01:15:18.565810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:56.622 [2024-11-26 01:15:18.565816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.566833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.566872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:56.622 [2024-11-26 01:15:18.566879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.001 ms 00:29:56.622 [2024-11-26 01:15:18.566885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.622 [2024-11-26 01:15:18.567972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.622 [2024-11-26 01:15:18.567998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:56.623 [2024-11-26 01:15:18.568005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.062 ms 00:29:56.623 [2024-11-26 01:15:18.568010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.568922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.623 [2024-11-26 01:15:18.568946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:56.623 [2024-11-26 01:15:18.568954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.888 ms 00:29:56.623 [2024-11-26 01:15:18.568959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.569854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.623 [2024-11-26 01:15:18.569878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:56.623 [2024-11-26 01:15:18.569885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.850 ms 
00:29:56.623 [2024-11-26 01:15:18.569890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.569912] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:56.623 [2024-11-26 01:15:18.569922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:56.623 [2024-11-26 01:15:18.569931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:56.623 [2024-11-26 01:15:18.569937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:56.623 [2024-11-26 01:15:18.569943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.569996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.570002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.570007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.570013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.570019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.570024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:56.623 [2024-11-26 01:15:18.570032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:56.623 [2024-11-26 01:15:18.570038] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a159412a-c3ea-4047-94bd-d071d6ab8847 00:29:56.623 [2024-11-26 01:15:18.570044] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:56.623 [2024-11-26 01:15:18.570049] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:29:56.623 [2024-11-26 01:15:18.570066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:56.623 [2024-11-26 01:15:18.570072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:56.623 [2024-11-26 
01:15:18.570077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:56.623 [2024-11-26 01:15:18.570084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:56.623 [2024-11-26 01:15:18.570090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:56.623 [2024-11-26 01:15:18.570095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:56.623 [2024-11-26 01:15:18.570100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:56.623 [2024-11-26 01:15:18.570108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.623 [2024-11-26 01:15:18.570114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:56.623 [2024-11-26 01:15:18.570120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:29:56.623 [2024-11-26 01:15:18.570126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.571376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.623 [2024-11-26 01:15:18.571471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:56.623 [2024-11-26 01:15:18.571484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.238 ms 00:29:56.623 [2024-11-26 01:15:18.571490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.571554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.623 [2024-11-26 01:15:18.571560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:56.623 [2024-11-26 01:15:18.571567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:29:56.623 [2024-11-26 01:15:18.571573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.575965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.575994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:56.623 [2024-11-26 01:15:18.576001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.576007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.576026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.576035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:56.623 [2024-11-26 01:15:18.576041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.576046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.576091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.576101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:56.623 [2024-11-26 01:15:18.576107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.576113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.576127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.576133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:56.623 [2024-11-26 01:15:18.576141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 
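A quick sanity check on the statistics dump above: WAF is simply total writes divided by user writes, 786752 / 524288 ≈ 1.5006, matching the reported figure; the extra 262464 block writes are I/O the FTL issued internally (metadata persistence and relocation) on top of the user workload.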
00:29:56.623 [2024-11-26 01:15:18.576149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.583974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.584003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:56.623 [2024-11-26 01:15:18.584010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.584016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.590402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:56.623 [2024-11-26 01:15:18.590414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.590420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.590461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:56.623 [2024-11-26 01:15:18.590472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.590477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.590527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:56.623 [2024-11-26 01:15:18.590534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.590540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.590598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:56.623 [2024-11-26 01:15:18.590604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.590612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.590641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:56.623 [2024-11-26 01:15:18.590647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.590653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.590691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:56.623 [2024-11-26 01:15:18.590697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.590704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.623 [2024-11-26 01:15:18.590744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:56.623 [2024-11-26 01:15:18.590753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl] duration: 0.000 ms 00:29:56.623 [2024-11-26 01:15:18.590759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.623 [2024-11-26 01:15:18.590866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7449.564 ms, result 0 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96339 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96339 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96339 ']' 00:30:00.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.832 01:15:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:00.832 [2024-11-26 01:15:23.069212] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:30:00.833 [2024-11-26 01:15:23.069326] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96339 ] 00:30:00.833 [2024-11-26 01:15:23.200290] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
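The shutdown traced earlier goes through killprocess from common/autotest_common.sh; the xtrace lines (@954-@978) let its logic be sketched roughly. This is a reconstruction from the trace, not the helper's actual source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                            # @954: empty-pid guard
        kill -0 "$pid" || return                             # @958: still alive?
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: here reactor_0
        fi
        # @964: a target running under sudo would need its child resolved;
        # reactor_0 != sudo in this run, so that branch is elided here.
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978
    }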
00:30:00.833 [2024-11-26 01:15:23.217040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.833 [2024-11-26 01:15:23.234043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.833 [2024-11-26 01:15:23.484382] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:00.833 [2024-11-26 01:15:23.484432] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:00.833 [2024-11-26 01:15:23.623189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.623236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:00.833 [2024-11-26 01:15:23.623249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:00.833 [2024-11-26 01:15:23.623257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.623305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.623317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:00.833 [2024-11-26 01:15:23.623325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:00.833 [2024-11-26 01:15:23.623332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.623359] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:00.833 [2024-11-26 01:15:23.623601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:00.833 [2024-11-26 01:15:23.623616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.623624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:00.833 [2024-11-26 01:15:23.623632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:30:00.833 [2024-11-26 01:15:23.623639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.624701] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:00.833 [2024-11-26 01:15:23.627237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.627273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:00.833 [2024-11-26 01:15:23.627282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.538 ms 00:30:00.833 [2024-11-26 01:15:23.627290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.627354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.627364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:00.833 [2024-11-26 01:15:23.627372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:00.833 [2024-11-26 01:15:23.627379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.632428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.632457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:00.833 [2024-11-26 01:15:23.632466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.993 ms 00:30:00.833 [2024-11-26 01:15:23.632473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:30:00.833 [2024-11-26 01:15:23.632515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.632524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:00.833 [2024-11-26 01:15:23.632531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:00.833 [2024-11-26 01:15:23.632538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.632583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.632592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:00.833 [2024-11-26 01:15:23.632599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:00.833 [2024-11-26 01:15:23.632608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.632628] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:00.833 [2024-11-26 01:15:23.634026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.634052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:00.833 [2024-11-26 01:15:23.634080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.402 ms 00:30:00.833 [2024-11-26 01:15:23.634089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.634120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.634136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:00.833 [2024-11-26 01:15:23.634145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:00.833 [2024-11-26 01:15:23.634153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.634180] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:00.833 [2024-11-26 01:15:23.634199] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:00.833 [2024-11-26 01:15:23.634235] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:00.833 [2024-11-26 01:15:23.634258] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:00.833 [2024-11-26 01:15:23.634363] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:00.833 [2024-11-26 01:15:23.634377] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:00.833 [2024-11-26 01:15:23.634391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:00.833 [2024-11-26 01:15:23.634402] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:00.833 [2024-11-26 01:15:23.634410] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:00.833 [2024-11-26 01:15:23.634418] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:00.833 [2024-11-26 01:15:23.634425] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:00.833 [2024-11-26 01:15:23.634432] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:00.833 [2024-11-26 01:15:23.634440] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:00.833 [2024-11-26 01:15:23.634447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.634456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:00.833 [2024-11-26 01:15:23.634463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:30:00.833 [2024-11-26 01:15:23.634470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.634554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.833 [2024-11-26 01:15:23.634565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:00.833 [2024-11-26 01:15:23.634573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:00.833 [2024-11-26 01:15:23.634581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.833 [2024-11-26 01:15:23.634685] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:00.833 [2024-11-26 01:15:23.634695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:00.833 [2024-11-26 01:15:23.634705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:00.833 [2024-11-26 01:15:23.634713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.833 [2024-11-26 01:15:23.634720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:00.833 [2024-11-26 01:15:23.634727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:00.833 [2024-11-26 01:15:23.634733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:00.833 [2024-11-26 01:15:23.634740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:00.833 [2024-11-26 01:15:23.634748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:00.833 [2024-11-26 01:15:23.634754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.833 [2024-11-26 01:15:23.634760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:00.833 [2024-11-26 01:15:23.634767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:00.833 [2024-11-26 01:15:23.634773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.833 [2024-11-26 01:15:23.634779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:00.833 [2024-11-26 01:15:23.634786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:00.833 [2024-11-26 01:15:23.634796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.833 [2024-11-26 01:15:23.634811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:00.833 [2024-11-26 01:15:23.634817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:00.833 [2024-11-26 01:15:23.634823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.833 [2024-11-26 01:15:23.634830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:00.833 [2024-11-26 01:15:23.634836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:00.833 [2024-11-26 01:15:23.634861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:00.833 [2024-11-26 
01:15:23.634868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:00.833 [2024-11-26 01:15:23.634874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:00.833 [2024-11-26 01:15:23.634881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:00.833 [2024-11-26 01:15:23.634887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:00.833 [2024-11-26 01:15:23.634894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:00.833 [2024-11-26 01:15:23.634900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:00.833 [2024-11-26 01:15:23.634907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:00.833 [2024-11-26 01:15:23.634913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:00.833 [2024-11-26 01:15:23.634920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:00.833 [2024-11-26 01:15:23.634929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:00.833 [2024-11-26 01:15:23.634935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:00.833 [2024-11-26 01:15:23.634941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.833 [2024-11-26 01:15:23.634948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:00.833 [2024-11-26 01:15:23.634954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:00.834 [2024-11-26 01:15:23.634960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.834 [2024-11-26 01:15:23.634967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:00.834 [2024-11-26 01:15:23.634974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:00.834 [2024-11-26 01:15:23.634980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.834 [2024-11-26 01:15:23.634986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:00.834 [2024-11-26 01:15:23.634993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:00.834 [2024-11-26 01:15:23.634999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.834 [2024-11-26 01:15:23.635005] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:00.834 [2024-11-26 01:15:23.635013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:00.834 [2024-11-26 01:15:23.635020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:00.834 [2024-11-26 01:15:23.635027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:00.834 [2024-11-26 01:15:23.635036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:00.834 [2024-11-26 01:15:23.635045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:00.834 [2024-11-26 01:15:23.635052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:00.834 [2024-11-26 01:15:23.635059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:00.834 [2024-11-26 01:15:23.635065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:00.834 [2024-11-26 01:15:23.635072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:00.834 [2024-11-26 01:15:23.635080] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:30:00.834 [2024-11-26 01:15:23.635088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:00.834 [2024-11-26 01:15:23.635103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:00.834 [2024-11-26 01:15:23.635124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:00.834 [2024-11-26 01:15:23.635131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:00.834 [2024-11-26 01:15:23.635138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:00.834 [2024-11-26 01:15:23.635145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:00.834 [2024-11-26 01:15:23.635204] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:00.834 [2024-11-26 01:15:23.635212] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:00.834 [2024-11-26 01:15:23.635228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:00.834 [2024-11-26 01:15:23.635235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:00.834 [2024-11-26 01:15:23.635241] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:00.834 [2024-11-26 01:15:23.635248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.834 [2024-11-26 01:15:23.635255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:00.834 [2024-11-26 01:15:23.635262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.631 ms 00:30:00.834 [2024-11-26 01:15:23.635269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.834 [2024-11-26 01:15:23.635309] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:00.834 [2024-11-26 01:15:23.635320] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:05.046 [2024-11-26 01:15:27.519495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.519591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:05.046 [2024-11-26 01:15:27.519608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3884.170 ms 00:30:05.046 [2024-11-26 01:15:27.519626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.533840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.533918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:05.046 [2024-11-26 01:15:27.533933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.085 ms 00:30:05.046 [2024-11-26 01:15:27.533944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.534002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.534021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:05.046 [2024-11-26 01:15:27.534031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:05.046 [2024-11-26 01:15:27.534047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.546793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.546873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:05.046 [2024-11-26 01:15:27.546889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.647 ms 00:30:05.046 [2024-11-26 01:15:27.546898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.546946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.546956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:05.046 [2024-11-26 01:15:27.546969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:05.046 [2024-11-26 01:15:27.546983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.547545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.547578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:05.046 [2024-11-26 01:15:27.547601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:30:05.046 [2024-11-26 01:15:27.547610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
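The two layout dumps above are mutually consistent if one assumes a 4 KiB FTL block (an assumption here, but the numbers support it): the superblock region's blk_sz:0x20 is 32 blocks = 128 KiB, printed as 0.12 MiB by dump_region, and the L2P region's blk_sz:0xe80 is 3712 blocks = 14.50 MiB, comfortably holding the reported 3774873 L2P entries at 4 bytes each (≈14.4 MiB).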
00:30:05.046 [2024-11-26 01:15:27.547663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.547677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:05.046 [2024-11-26 01:15:27.547685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:05.046 [2024-11-26 01:15:27.547700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.556197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.556244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:05.046 [2024-11-26 01:15:27.556255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.469 ms 00:30:05.046 [2024-11-26 01:15:27.556264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.559829] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:05.046 [2024-11-26 01:15:27.559925] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:05.046 [2024-11-26 01:15:27.559937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.559946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:05.046 [2024-11-26 01:15:27.559955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.568 ms 00:30:05.046 [2024-11-26 01:15:27.559963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.565034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.565224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:05.046 [2024-11-26 01:15:27.565247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.018 ms 00:30:05.046 [2024-11-26 01:15:27.565256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.567961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.568008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:05.046 [2024-11-26 01:15:27.568019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.631 ms 00:30:05.046 [2024-11-26 01:15:27.568027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.570488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.570533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:05.046 [2024-11-26 01:15:27.570543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.415 ms 00:30:05.046 [2024-11-26 01:15:27.570550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.570923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.570938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:05.046 [2024-11-26 01:15:27.570953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:30:05.046 [2024-11-26 01:15:27.570966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.604737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:30:05.046 [2024-11-26 01:15:27.604802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:05.046 [2024-11-26 01:15:27.604817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.749 ms 00:30:05.046 [2024-11-26 01:15:27.604827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.046 [2024-11-26 01:15:27.612897] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:05.046 [2024-11-26 01:15:27.613789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.613828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:05.047 [2024-11-26 01:15:27.613870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.881 ms 00:30:05.047 [2024-11-26 01:15:27.613880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.613962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.613974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:05.047 [2024-11-26 01:15:27.613984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:05.047 [2024-11-26 01:15:27.613992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.614040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.614054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:05.047 [2024-11-26 01:15:27.614092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:05.047 [2024-11-26 01:15:27.614104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.614135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.614149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:05.047 [2024-11-26 01:15:27.614162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:05.047 [2024-11-26 01:15:27.614174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.614223] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:05.047 [2024-11-26 01:15:27.614238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.614260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:05.047 [2024-11-26 01:15:27.614280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:05.047 [2024-11-26 01:15:27.614293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.619271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.619466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:05.047 [2024-11-26 01:15:27.619486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.936 ms 00:30:05.047 [2024-11-26 01:15:27.619494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.619575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.619585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:05.047 
[2024-11-26 01:15:27.619594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:05.047 [2024-11-26 01:15:27.619605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.620670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3997.044 ms, result 0 00:30:05.047 [2024-11-26 01:15:27.634307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.047 [2024-11-26 01:15:27.650326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:05.047 [2024-11-26 01:15:27.658443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:05.047 01:15:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.047 01:15:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:05.047 01:15:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:05.047 01:15:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:05.047 01:15:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:05.047 [2024-11-26 01:15:27.902595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.902660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:05.047 [2024-11-26 01:15:27.902675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:05.047 [2024-11-26 01:15:27.902685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.902711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.902720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:05.047 [2024-11-26 01:15:27.902732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:05.047 [2024-11-26 01:15:27.902741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.902763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.047 [2024-11-26 01:15:27.902772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:05.047 [2024-11-26 01:15:27.902781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:05.047 [2024-11-26 01:15:27.902789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.047 [2024-11-26 01:15:27.902877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.253 ms, result 0 00:30:05.047 true 00:30:05.047 01:15:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:05.352 { 00:30:05.352 "name": "ftl", 00:30:05.352 "properties": [ 00:30:05.352 { 00:30:05.352 "name": "superblock_version", 00:30:05.352 "value": 5, 00:30:05.352 "read-only": true 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "name": "base_device", 00:30:05.352 "bands": [ 00:30:05.352 { 00:30:05.352 "id": 0, 00:30:05.352 "state": "CLOSED", 00:30:05.352 "validity": 1.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 1, 00:30:05.352 "state": "CLOSED", 00:30:05.352 "validity": 1.0 
00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 2, 00:30:05.352 "state": "CLOSED", 00:30:05.352 "validity": 0.007843137254901933 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 3, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 4, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 5, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 6, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 7, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 8, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 9, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 10, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 11, 00:30:05.352 "state": "FREE", 00:30:05.352 "validity": 0.0 00:30:05.352 }, 00:30:05.352 { 00:30:05.352 "id": 12, 00:30:05.353 "state": "FREE", 00:30:05.353 "validity": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 13, 00:30:05.353 "state": "FREE", 00:30:05.353 "validity": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 14, 00:30:05.353 "state": "FREE", 00:30:05.353 "validity": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 15, 00:30:05.353 "state": "FREE", 00:30:05.353 "validity": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 16, 00:30:05.353 "state": "FREE", 00:30:05.353 "validity": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 17, 00:30:05.353 "state": "FREE", 00:30:05.353 "validity": 0.0 00:30:05.353 } 00:30:05.353 ], 00:30:05.353 "read-only": true 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "name": "cache_device", 00:30:05.353 "type": "bdev", 00:30:05.353 "chunks": [ 00:30:05.353 { 00:30:05.353 "id": 0, 00:30:05.353 "state": "INACTIVE", 00:30:05.353 "utilization": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 1, 00:30:05.353 "state": "OPEN", 00:30:05.353 "utilization": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 2, 00:30:05.353 "state": "OPEN", 00:30:05.353 "utilization": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 3, 00:30:05.353 "state": "FREE", 00:30:05.353 "utilization": 0.0 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "id": 4, 00:30:05.353 "state": "FREE", 00:30:05.353 "utilization": 0.0 00:30:05.353 } 00:30:05.353 ], 00:30:05.353 "read-only": true 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "name": "verbose_mode", 00:30:05.353 "value": true, 00:30:05.353 "unit": "", 00:30:05.353 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:05.353 }, 00:30:05.353 { 00:30:05.353 "name": "prep_upgrade_on_shutdown", 00:30:05.353 "value": false, 00:30:05.353 "unit": "", 00:30:05.353 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:05.353 } 00:30:05.353 ] 00:30:05.353 } 00:30:05.353 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:05.353 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:05.353 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:05.643 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:05.643 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:05.643 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:05.643 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:05.643 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:05.905 Validate MD5 checksum, iteration 1 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:05.905 01:15:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:05.905 [2024-11-26 01:15:28.644488] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:30:05.905 [2024-11-26 01:15:28.644628] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96409 ] 00:30:05.905 [2024-11-26 01:15:28.780445] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
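[Annotation: a minimal sketch of the property check traced above, assuming the helper name and paths echoed in the xtrace (upgrade_shutdown.sh@59/@82/@89); the real script may differ in detail. ftl_get_properties wraps the bdev_ftl_get_properties RPC, and jq counts cache chunks with non-zero utilization and bands left OPENED; on this freshly started instance both counts are 0, so the [[ 0 -ne 0 ]] guards fall through.]

ftl_get_properties() {
    # Dump the FTL bdev property JSON shown above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
}
# Chunks of the cache device that still hold data.
used=$(ftl_get_properties | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
# Bands left in the OPENED state. Note the filter selects a property literally
# named "bands"; no such property appears in the dump above (the bands array
# lives under "base_device"), so this count is reliably 0 here.
opened=$(ftl_get_properties | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')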
00:30:05.905 [2024-11-26 01:15:28.812485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.166 [2024-11-26 01:15:28.840959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.570  [2024-11-26T01:15:31.429Z] Copying: 458/1024 [MB] (458 MBps) [2024-11-26T01:15:31.429Z] Copying: 950/1024 [MB] (492 MBps) [2024-11-26T01:15:31.998Z] Copying: 1024/1024 [MB] (average 482 MBps) 00:30:09.081 00:30:09.081 01:15:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:09.081 01:15:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:11.624 Validate MD5 checksum, iteration 2 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a5f8cbf562cbceaa8aaca33d478eacc2 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a5f8cbf562cbceaa8aaca33d478eacc2 != \a\5\f\8\c\b\f\5\6\2\c\b\c\e\a\a\8\a\a\c\a\3\3\d\4\7\8\e\a\c\c\2 ]] 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:11.624 01:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:11.624 [2024-11-26 01:15:34.162166] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:30:11.624 [2024-11-26 01:15:34.162383] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96473 ] 00:30:11.624 [2024-11-26 01:15:34.293232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
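[Annotation: the two iterations here implement test_validate_checksum. A sketch of the loop, reconstructed from the xtrace; the checksums array and testdir variable are illustrative stand-ins, and the expected sums are values recorded earlier in the test.]

test_validate_checksum() {
    local skip=0 i sum
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 MiB slice of ftln1 over NVMe/TCP into a scratch file.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d ' ')
        # The right-hand side of [[ != ]] is treated as a glob pattern, which
        # is why the xtrace shows the expected sum backslash-escaped: escaping
        # (or quoting, as here) forces a literal comparison.
        [[ $sum != "${checksums[i]}" ]] && return 1
    done
}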
00:30:11.624 [2024-11-26 01:15:34.323461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.624 [2024-11-26 01:15:34.342167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.008  [2024-11-26T01:15:36.493Z] Copying: 621/1024 [MB] (621 MBps) [2024-11-26T01:15:37.061Z] Copying: 1024/1024 [MB] (average 606 MBps) 00:30:14.144 00:30:14.144 01:15:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:14.144 01:15:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6654f54b266af58a0f3048cc2b2bdc4d 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6654f54b266af58a0f3048cc2b2bdc4d != \6\6\5\4\f\5\4\b\2\6\6\a\f\5\8\a\0\f\3\0\4\8\c\c\2\b\2\b\d\c\4\d ]] 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 96339 ]] 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 96339 00:30:16.057 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=96524 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 96524 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 96524 ']' 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.058 01:15:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:16.058 [2024-11-26 01:15:38.557623] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 
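[Annotation: at this point the test simulates a crash. tcp_target_shutdown_dirty SIGKILLs the old target (pid 96339, per the "Killed" message below) and tcp_target_setup relaunches spdk_tgt from the saved tgt.json as pid 96524. A rough sketch under those assumptions; spdk_tgt_bin and spdk_tgt_cnfg are the variables visible in the Killed line, waitforlisten is the helper named in the log, and the bodies are simplified rather than verbatim.]

tcp_target_shutdown_dirty() {
    # SIGKILL gives FTL no chance to persist a clean-shutdown superblock,
    # so the next startup must run recovery.
    [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
    unset spdk_tgt_pid
}
tcp_target_setup() {
    # Relaunch the target from the JSON config captured earlier and wait
    # for its RPC socket to come up.
    "$spdk_tgt_bin" '--cpumask=[0]' --config="$spdk_tgt_cnfg" &
    spdk_tgt_pid=$!
    waitforlisten $spdk_tgt_pid
}

The startup trace that follows is the payoff: "SHM: clean 0, shm_clean 0", "Recover band state", and the two "Recover open chunk" passes show FTL replaying P2L checkpoints instead of taking the clean-startup path it took at 01:15:27.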
00:30:16.058 [2024-11-26 01:15:38.557712] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96524 ] 00:30:16.058 [2024-11-26 01:15:38.683566] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:16.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 96339 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:16.058 [2024-11-26 01:15:38.711615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.058 [2024-11-26 01:15:38.731701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.319 [2024-11-26 01:15:39.041105] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:16.319 [2024-11-26 01:15:39.041437] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:16.319 [2024-11-26 01:15:39.190337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.190401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:16.319 [2024-11-26 01:15:39.190416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:16.319 [2024-11-26 01:15:39.190425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.190491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.190505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:16.319 [2024-11-26 01:15:39.190514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:16.319 [2024-11-26 01:15:39.190521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.190548] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:16.319 [2024-11-26 01:15:39.190832] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:16.319 [2024-11-26 01:15:39.190882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.190891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:16.319 [2024-11-26 01:15:39.190900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:30:16.319 [2024-11-26 01:15:39.190915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.191278] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:16.319 [2024-11-26 01:15:39.196547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.196771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:16.319 [2024-11-26 01:15:39.196793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.268 ms 00:30:16.319 [2024-11-26 01:15:39.196803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.198322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.198370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:30:16.319 [2024-11-26 01:15:39.198384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:30:16.319 [2024-11-26 01:15:39.198392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.198681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.198694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:16.319 [2024-11-26 01:15:39.198708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:30:16.319 [2024-11-26 01:15:39.198716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.198753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.198762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:16.319 [2024-11-26 01:15:39.198771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:16.319 [2024-11-26 01:15:39.198779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.198811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.198827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:16.319 [2024-11-26 01:15:39.198835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:16.319 [2024-11-26 01:15:39.198862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.198884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:16.319 [2024-11-26 01:15:39.200164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.200205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:16.319 [2024-11-26 01:15:39.200217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.285 ms 00:30:16.319 [2024-11-26 01:15:39.200225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.200266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.200276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:16.319 [2024-11-26 01:15:39.200286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:16.319 [2024-11-26 01:15:39.200299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.200338] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:16.319 [2024-11-26 01:15:39.200360] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:16.319 [2024-11-26 01:15:39.200400] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:16.319 [2024-11-26 01:15:39.200423] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:16.319 [2024-11-26 01:15:39.200531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:16.319 [2024-11-26 01:15:39.200543] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:16.319 [2024-11-26 01:15:39.200555] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:16.319 [2024-11-26 01:15:39.200567] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:16.319 [2024-11-26 01:15:39.200581] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:16.319 [2024-11-26 01:15:39.200589] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:16.319 [2024-11-26 01:15:39.200597] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:16.319 [2024-11-26 01:15:39.200605] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:16.319 [2024-11-26 01:15:39.200613] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:16.319 [2024-11-26 01:15:39.200621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.200631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:16.319 [2024-11-26 01:15:39.200639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:30:16.319 [2024-11-26 01:15:39.200650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.200735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.319 [2024-11-26 01:15:39.200746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:16.319 [2024-11-26 01:15:39.200754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:16.319 [2024-11-26 01:15:39.200761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.319 [2024-11-26 01:15:39.200886] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:16.319 [2024-11-26 01:15:39.200898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:16.319 [2024-11-26 01:15:39.200910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:16.319 [2024-11-26 01:15:39.200918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.200926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:16.319 [2024-11-26 01:15:39.200933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.200940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:16.319 [2024-11-26 01:15:39.200947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:16.319 [2024-11-26 01:15:39.200955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:16.319 [2024-11-26 01:15:39.200962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.200974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:16.319 [2024-11-26 01:15:39.200982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:16.319 [2024-11-26 01:15:39.200989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:16.319 [2024-11-26 01:15:39.201017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:16.319 [2024-11-26 01:15:39.201024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 
01:15:39.201031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:16.319 [2024-11-26 01:15:39.201039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:16.319 [2024-11-26 01:15:39.201046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:16.319 [2024-11-26 01:15:39.201061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:16.319 [2024-11-26 01:15:39.201068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.319 [2024-11-26 01:15:39.201075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:16.319 [2024-11-26 01:15:39.201083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:16.319 [2024-11-26 01:15:39.201089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.319 [2024-11-26 01:15:39.201096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:16.319 [2024-11-26 01:15:39.201103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:16.319 [2024-11-26 01:15:39.201110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.319 [2024-11-26 01:15:39.201117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:16.319 [2024-11-26 01:15:39.201211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:16.319 [2024-11-26 01:15:39.201218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:16.319 [2024-11-26 01:15:39.201225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:16.319 [2024-11-26 01:15:39.201232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:16.319 [2024-11-26 01:15:39.201239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:16.319 [2024-11-26 01:15:39.201252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:16.319 [2024-11-26 01:15:39.201259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:16.319 [2024-11-26 01:15:39.201273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:16.319 [2024-11-26 01:15:39.201294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:16.319 [2024-11-26 01:15:39.201300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201306] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:16.319 [2024-11-26 01:15:39.201314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:16.319 [2024-11-26 01:15:39.201324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:16.319 [2024-11-26 01:15:39.201334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:16.319 [2024-11-26 01:15:39.201342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:16.319 [2024-11-26 
01:15:39.201349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:16.319 [2024-11-26 01:15:39.201356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:16.319 [2024-11-26 01:15:39.201364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:16.319 [2024-11-26 01:15:39.201371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:16.319 [2024-11-26 01:15:39.201378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:16.319 [2024-11-26 01:15:39.201386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:16.319 [2024-11-26 01:15:39.201401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.319 [2024-11-26 01:15:39.201409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:16.319 [2024-11-26 01:15:39.201416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:16.319 [2024-11-26 01:15:39.201423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:16.320 [2024-11-26 01:15:39.201438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:16.320 [2024-11-26 01:15:39.201444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:16.320 [2024-11-26 01:15:39.201454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:16.320 [2024-11-26 01:15:39.201461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:16.320 [2024-11-26 01:15:39.201510] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:16.320 [2024-11-26 01:15:39.201518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:16.320 [2024-11-26 01:15:39.201536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:16.320 [2024-11-26 01:15:39.201543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:16.320 [2024-11-26 01:15:39.201549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:16.320 [2024-11-26 01:15:39.201557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.320 [2024-11-26 01:15:39.201564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:16.320 [2024-11-26 01:15:39.201575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.764 ms 00:30:16.320 [2024-11-26 01:15:39.201584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.320 [2024-11-26 01:15:39.213671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.320 [2024-11-26 01:15:39.213721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:16.320 [2024-11-26 01:15:39.213735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.035 ms 00:30:16.320 [2024-11-26 01:15:39.213748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.320 [2024-11-26 01:15:39.213790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.320 [2024-11-26 01:15:39.213800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:16.320 [2024-11-26 01:15:39.213813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:16.320 [2024-11-26 01:15:39.213821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.320 [2024-11-26 01:15:39.226946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.320 [2024-11-26 01:15:39.226988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:16.320 [2024-11-26 01:15:39.226999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.039 ms 00:30:16.320 [2024-11-26 01:15:39.227007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.320 [2024-11-26 01:15:39.227049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.320 [2024-11-26 01:15:39.227062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:16.320 [2024-11-26 01:15:39.227074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:16.320 [2024-11-26 01:15:39.227081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.320 [2024-11-26 01:15:39.227180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.320 [2024-11-26 01:15:39.227197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:16.320 [2024-11-26 01:15:39.227207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:30:16.320 [2024-11-26 01:15:39.227216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.320 [2024-11-26 01:15:39.227262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.320 
[2024-11-26 01:15:39.227272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:16.320 [2024-11-26 01:15:39.227285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:16.320 [2024-11-26 01:15:39.227293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.235782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.235828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:16.581 [2024-11-26 01:15:39.235839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.465 ms 00:30:16.581 [2024-11-26 01:15:39.235871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.235976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.235990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:16.581 [2024-11-26 01:15:39.235999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:16.581 [2024-11-26 01:15:39.236008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.255186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.255494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:16.581 [2024-11-26 01:15:39.255535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.154 ms 00:30:16.581 [2024-11-26 01:15:39.255553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.258162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.258220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:16.581 [2024-11-26 01:15:39.258241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:30:16.581 [2024-11-26 01:15:39.258256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.282621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.282877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:16.581 [2024-11-26 01:15:39.282910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.263 ms 00:30:16.581 [2024-11-26 01:15:39.282920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.283057] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:16.581 [2024-11-26 01:15:39.283154] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:16.581 [2024-11-26 01:15:39.283240] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:16.581 [2024-11-26 01:15:39.283329] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:16.581 [2024-11-26 01:15:39.283340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.283352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:16.581 [2024-11-26 01:15:39.283362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.372 ms 00:30:16.581 [2024-11-26 01:15:39.283372] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.283469] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:16.581 [2024-11-26 01:15:39.283484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.283493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:16.581 [2024-11-26 01:15:39.283506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:16.581 [2024-11-26 01:15:39.283516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.286873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.286906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:16.581 [2024-11-26 01:15:39.286919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.332 ms 00:30:16.581 [2024-11-26 01:15:39.286927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.287807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.287868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:16.581 [2024-11-26 01:15:39.287881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:16.581 [2024-11-26 01:15:39.287890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.581 [2024-11-26 01:15:39.287969] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:16.581 [2024-11-26 01:15:39.288127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.581 [2024-11-26 01:15:39.288143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:16.581 [2024-11-26 01:15:39.288152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:30:16.581 [2024-11-26 01:15:39.288160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.150 [2024-11-26 01:15:39.949132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.150 [2024-11-26 01:15:39.949197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:17.150 [2024-11-26 01:15:39.949222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 660.659 ms 00:30:17.150 [2024-11-26 01:15:39.949234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.150 [2024-11-26 01:15:39.950700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.150 [2024-11-26 01:15:39.950739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:17.150 [2024-11-26 01:15:39.950749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.147 ms 00:30:17.150 [2024-11-26 01:15:39.950757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.150 [2024-11-26 01:15:39.951166] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:17.150 [2024-11-26 01:15:39.951195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.150 [2024-11-26 01:15:39.951204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:17.150 [2024-11-26 01:15:39.951213] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:30:17.150 [2024-11-26 01:15:39.951221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.150 [2024-11-26 01:15:39.951254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.150 [2024-11-26 01:15:39.951271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:17.150 [2024-11-26 01:15:39.951280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:17.150 [2024-11-26 01:15:39.951287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.150 [2024-11-26 01:15:39.951321] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 663.350 ms, result 0 00:30:17.150 [2024-11-26 01:15:39.951372] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:17.150 [2024-11-26 01:15:39.951444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.150 [2024-11-26 01:15:39.951460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:17.150 [2024-11-26 01:15:39.951468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:30:17.150 [2024-11-26 01:15:39.951475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.720 [2024-11-26 01:15:40.559868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.720 [2024-11-26 01:15:40.560191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:17.720 [2024-11-26 01:15:40.560270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 608.035 ms 00:30:17.720 [2024-11-26 01:15:40.560295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.720 [2024-11-26 01:15:40.562273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.720 [2024-11-26 01:15:40.562323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:17.720 [2024-11-26 01:15:40.562335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.434 ms 00:30:17.720 [2024-11-26 01:15:40.562343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.720 [2024-11-26 01:15:40.562875] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:17.720 [2024-11-26 01:15:40.562915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.720 [2024-11-26 01:15:40.562925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:17.720 [2024-11-26 01:15:40.562936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:30:17.720 [2024-11-26 01:15:40.562945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.720 [2024-11-26 01:15:40.562982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.720 [2024-11-26 01:15:40.562992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:17.720 [2024-11-26 01:15:40.563001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:17.720 [2024-11-26 01:15:40.563009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.720 [2024-11-26 01:15:40.563048] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 
611.678 ms, result 0 00:30:17.720 [2024-11-26 01:15:40.563095] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:17.720 [2024-11-26 01:15:40.563106] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:17.720 [2024-11-26 01:15:40.563117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.720 [2024-11-26 01:15:40.563125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:17.720 [2024-11-26 01:15:40.563139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1275.162 ms 00:30:17.720 [2024-11-26 01:15:40.563147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.720 [2024-11-26 01:15:40.563180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.720 [2024-11-26 01:15:40.563188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:17.721 [2024-11-26 01:15:40.563197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:17.721 [2024-11-26 01:15:40.563205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.572389] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:17.721 [2024-11-26 01:15:40.572548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.572561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:17.721 [2024-11-26 01:15:40.572572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.327 ms 00:30:17.721 [2024-11-26 01:15:40.572581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.573333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.573360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:17.721 [2024-11-26 01:15:40.573376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.669 ms 00:30:17.721 [2024-11-26 01:15:40.573384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.575675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.575854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:17.721 [2024-11-26 01:15:40.575873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.269 ms 00:30:17.721 [2024-11-26 01:15:40.575881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.575946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.575956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:17.721 [2024-11-26 01:15:40.575965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:17.721 [2024-11-26 01:15:40.575978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.576094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.576108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:17.721 [2024-11-26 01:15:40.576117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:17.721 [2024-11-26 
01:15:40.576125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.576148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.576156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:17.721 [2024-11-26 01:15:40.576168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:17.721 [2024-11-26 01:15:40.576176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.576211] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:17.721 [2024-11-26 01:15:40.576221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.576239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:17.721 [2024-11-26 01:15:40.576251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:17.721 [2024-11-26 01:15:40.576259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.576314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.721 [2024-11-26 01:15:40.576324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:17.721 [2024-11-26 01:15:40.576333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:17.721 [2024-11-26 01:15:40.576340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.721 [2024-11-26 01:15:40.577474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1386.657 ms, result 0 00:30:17.721 [2024-11-26 01:15:40.593158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.721 [2024-11-26 01:15:40.609166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:17.721 [2024-11-26 01:15:40.617340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:18.293 Validate MD5 checksum, iteration 1 00:30:18.293 01:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.293 01:15:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:18.293 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:18.293 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:18.293 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:18.293 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 
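[Annotation: tcp_dd, expanded in the xtrace around this point, is the initiator-side dd. A condensed sketch; the flags, socket, and config paths are as echoed in the log, while the rootdir variable and the function bodies are illustrative.]

tcp_dd() {
    tcp_initiator_setup    # checks that test/ftl/config/ini.json exists
    # ini.json attaches ftln1 from the target listening on 127.0.0.1:4420,
    # so the copy below exercises the full NVMe/TCP path into the FTL bdev.
    "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$rootdir/test/ftl/config/ini.json" "$@"
}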
00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:18.294 01:15:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:18.294 [2024-11-26 01:15:41.200746] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:30:18.294 [2024-11-26 01:15:41.201020] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96558 ] 00:30:18.558 [2024-11-26 01:15:41.332773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:18.558 [2024-11-26 01:15:41.359715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:18.559 [2024-11-26 01:15:41.376389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.982  [2024-11-26T01:15:43.472Z] Copying: 682/1024 [MB] (682 MBps) [2024-11-26T01:15:44.043Z] Copying: 1024/1024 [MB] (average 682 MBps) 00:30:21.126 00:30:21.126 01:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:21.126 01:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a5f8cbf562cbceaa8aaca33d478eacc2 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a5f8cbf562cbceaa8aaca33d478eacc2 != \a\5\f\8\c\b\f\5\6\2\c\b\c\e\a\a\8\a\a\c\a\3\3\d\4\7\8\e\a\c\c\2 ]] 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:23.042 Validate MD5 checksum, iteration 2 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:23.042 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:23.043 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:23.043 01:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:23.043 
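[Annotation: the spdk_dd run below reads the second 1024 MiB slice after recovery. The invariant the test is after: both post-recovery checksums must equal the pre-kill values (a5f8cbf5... for slice 0, 6654f54b... for slice 1), i.e. the dirty shutdown lost no acknowledged data. Illustratively, with hypothetical arrays:]

# Checksums taken before the kill -9 (values from the log above).
pre=(a5f8cbf562cbceaa8aaca33d478eacc2 6654f54b266af58a0f3048cc2b2bdc4d)
# Stand-in: in the real test these are recomputed slice by slice after the
# restart, exactly as before the kill -9.
post=("${pre[@]}")
for i in "${!pre[@]}"; do
    [[ ${pre[i]} == "${post[i]}" ]] || exit 1
done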
[2024-11-26 01:15:45.914897] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:30:23.043 [2024-11-26 01:15:45.915140] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96616 ] 00:30:23.304 [2024-11-26 01:15:46.048157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:23.304 [2024-11-26 01:15:46.075030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.304 [2024-11-26 01:15:46.103466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.681  [2024-11-26T01:15:47.858Z] Copying: 771/1024 [MB] (771 MBps) [2024-11-26T01:15:48.428Z] Copying: 1024/1024 [MB] (average 727 MBps) 00:30:25.511 00:30:25.511 01:15:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:25.511 01:15:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=6654f54b266af58a0f3048cc2b2bdc4d 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 6654f54b266af58a0f3048cc2b2bdc4d != \6\6\5\4\f\5\4\b\2\6\6\a\f\5\8\a\0\f\3\0\4\8\c\c\2\b\2\b\d\c\4\d ]] 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:27.423 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 96524 ]] 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 96524 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 96524 ']' 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 96524 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96524 00:30:27.424 killing process with pid 96524 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96524' 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 96524 00:30:27.424 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 96524 00:30:27.685 [2024-11-26 01:15:50.342074] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:27.685 [2024-11-26 01:15:50.347116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.347150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:27.685 [2024-11-26 01:15:50.347160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:27.685 [2024-11-26 01:15:50.347167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.347184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:27.685 [2024-11-26 01:15:50.347550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.347565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:27.685 [2024-11-26 01:15:50.347573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.356 ms 00:30:27.685 [2024-11-26 01:15:50.347579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.347755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.347763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:27.685 [2024-11-26 01:15:50.347773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.161 ms 00:30:27.685 [2024-11-26 01:15:50.347779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.348810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.348833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:27.685 [2024-11-26 01:15:50.348974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:30:27.685 [2024-11-26 01:15:50.348990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.349875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.349888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:27.685 [2024-11-26 01:15:50.349896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:30:27.685 [2024-11-26 01:15:50.349903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.351183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.351213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:27.685 [2024-11-26 01:15:50.351225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.237 ms 00:30:27.685 [2024-11-26 01:15:50.351231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.352411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.352438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:30:27.685 [2024-11-26 01:15:50.352445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.154 ms 00:30:27.685 [2024-11-26 01:15:50.352451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.352509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.352516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:27.685 [2024-11-26 01:15:50.352522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:27.685 [2024-11-26 01:15:50.352532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.353743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.353769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:27.685 [2024-11-26 01:15:50.353776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.199 ms 00:30:27.685 [2024-11-26 01:15:50.353781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.355077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.355102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:27.685 [2024-11-26 01:15:50.355109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.272 ms 00:30:27.685 [2024-11-26 01:15:50.355115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.356061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.685 [2024-11-26 01:15:50.356087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:27.685 [2024-11-26 01:15:50.356094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.921 ms 00:30:27.685 [2024-11-26 01:15:50.356100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.685 [2024-11-26 01:15:50.357163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.686 [2024-11-26 01:15:50.357259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:27.686 [2024-11-26 01:15:50.357270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:30:27.686 [2024-11-26 01:15:50.357275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.357298] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:27.686 [2024-11-26 01:15:50.357308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.686 [2024-11-26 01:15:50.357316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:27.686 [2024-11-26 01:15:50.357322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:27.686 [2024-11-26 01:15:50.357329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357347] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.686 [2024-11-26 01:15:50.357418] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:27.686 [2024-11-26 01:15:50.357425] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a159412a-c3ea-4047-94bd-d071d6ab8847 00:30:27.686 [2024-11-26 01:15:50.357430] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:27.686 [2024-11-26 01:15:50.357436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:27.686 [2024-11-26 01:15:50.357441] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:27.686 [2024-11-26 01:15:50.357446] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:27.686 [2024-11-26 01:15:50.357452] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:27.686 [2024-11-26 01:15:50.357457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:27.686 [2024-11-26 01:15:50.357466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:27.686 [2024-11-26 01:15:50.357471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:27.686 [2024-11-26 01:15:50.357476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:27.686 [2024-11-26 01:15:50.357482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.686 [2024-11-26 01:15:50.357488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:27.686 [2024-11-26 01:15:50.357495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:30:27.686 [2024-11-26 01:15:50.357500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.358725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.686 [2024-11-26 01:15:50.358752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:30:27.686 [2024-11-26 01:15:50.358759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.211 ms 00:30:27.686 [2024-11-26 01:15:50.358765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.358837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.686 [2024-11-26 01:15:50.358856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:27.686 [2024-11-26 01:15:50.358863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:27.686 [2024-11-26 01:15:50.358868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.363288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.363314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.686 [2024-11-26 01:15:50.363321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.363330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.363351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.363357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.686 [2024-11-26 01:15:50.363363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.363369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.363418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.363425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.686 [2024-11-26 01:15:50.363432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.363437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.363453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.363463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.686 [2024-11-26 01:15:50.363472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.363478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.371278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.371311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.686 [2024-11-26 01:15:50.371319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.371329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.377367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:27.686 [2024-11-26 01:15:50.377375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.377381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.377420] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:27.686 [2024-11-26 01:15:50.377426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.377436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.377484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:27.686 [2024-11-26 01:15:50.377491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.377496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.377555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:27.686 [2024-11-26 01:15:50.377560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.377566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.377594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:27.686 [2024-11-26 01:15:50.377602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.377607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.377641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:27.686 [2024-11-26 01:15:50.377647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.377653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:27.686 [2024-11-26 01:15:50.377693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:27.686 [2024-11-26 01:15:50.377699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:27.686 [2024-11-26 01:15:50.377704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.686 [2024-11-26 01:15:50.377792] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 30.654 ms, result 0 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:27.686 Remove shared memory files 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid96339 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:27.686 01:15:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:27.686 ************************************ 00:30:27.686 END TEST ftl_upgrade_shutdown 00:30:27.686 ************************************ 00:30:27.687 00:30:27.687 real 1m13.367s 00:30:27.687 user 1m37.638s 00:30:27.687 sys 0m19.657s 00:30:27.687 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.687 01:15:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:27.687 01:15:50 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:27.687 01:15:50 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:27.687 01:15:50 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:27.687 01:15:50 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.687 01:15:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:27.687 ************************************ 00:30:27.687 START TEST ftl_restore_fast 00:30:27.687 ************************************ 00:30:27.687 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:27.946 * Looking for test storage... 00:30:27.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.946 --rc genhtml_branch_coverage=1 00:30:27.946 --rc genhtml_function_coverage=1 00:30:27.946 --rc genhtml_legend=1 00:30:27.946 --rc geninfo_all_blocks=1 00:30:27.946 --rc geninfo_unexecuted_blocks=1 00:30:27.946 00:30:27.946 ' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.946 --rc genhtml_branch_coverage=1 00:30:27.946 --rc genhtml_function_coverage=1 00:30:27.946 --rc genhtml_legend=1 00:30:27.946 --rc geninfo_all_blocks=1 00:30:27.946 --rc geninfo_unexecuted_blocks=1 00:30:27.946 00:30:27.946 ' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.946 --rc genhtml_branch_coverage=1 00:30:27.946 --rc genhtml_function_coverage=1 00:30:27.946 --rc genhtml_legend=1 00:30:27.946 --rc geninfo_all_blocks=1 00:30:27.946 --rc geninfo_unexecuted_blocks=1 00:30:27.946 00:30:27.946 ' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:27.946 --rc genhtml_branch_coverage=1 00:30:27.946 --rc genhtml_function_coverage=1 00:30:27.946 --rc genhtml_legend=1 00:30:27.946 --rc geninfo_all_blocks=1 00:30:27.946 --rc geninfo_unexecuted_blocks=1 00:30:27.946 00:30:27.946 ' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
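The scripts/common.sh frames above walk through the lcov version gate ("lt 1.15 2"): both version strings are split on '.', '-' and ':' and compared numerically field by field. A compact reconstruction of the comparison the trace executes; the splitting and the ternary length bound match the sh@NNN frames, while the handling of fully equal versions is an assumption:

    # Sketch of lt/cmp_versions as traced above (not the verbatim script).
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local op=$2 v ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        # Walk the longer of the two field lists; missing fields count as 0
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *=* ]]   # all fields equal: assumed only <=, >=, == succeed
    }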
00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:27.946 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.cUOuq3axDw 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:27.947 01:15:50 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=96747 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 96747 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 96747 ']' 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:27.947 01:15:50 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:27.947 [2024-11-26 01:15:50.837574] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:30:27.947 [2024-11-26 01:15:50.837813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96747 ] 00:30:28.205 [2024-11-26 01:15:50.963868] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
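The ftl.ftl_restore_fast frames above cover the restore.sh prologue: option parsing, then launching spdk_tgt and waiting for its RPC socket. A sketch of that flow, with restore_kill and waitforlisten the autotest helpers named in the trace and the -u branch's meaning assumed:

    # Sketch of the restore.sh prologue as traced: -f selects the
    # fast-shutdown variant, -c names the NV-cache PCIe device, and the
    # remaining positional argument is the base device.
    while getopts ':u:c:f' opt; do
        case $opt in
            f) fast_shutdown=1 ;;    # later appends --fast-shutdown to bdev_ftl_create
            c) nv_cache=$OPTARG ;;   # 0000:00:10.0 in this run
            u) uuid=$OPTARG ;;       # assumed: restore an existing FTL instance
        esac
    done
    shift 3                          # past "-f -c 0000:00:10.0" in this invocation
    device=$1                        # 0000:00:11.0
    timeout=240
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
    "$rootdir/build/bin/spdk_tgt" &  # pid 96747 in this run
    svcpid=$!
    waitforlisten $svcpid

With -f set, the bdev_ftl_create call later in the trace carries --fast-shutdown, which is the shutdown path this test variant exercises.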
00:30:28.205 [2024-11-26 01:15:50.988695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.205 [2024-11-26 01:15:51.005480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:28.771 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:29.029 01:15:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:29.287 { 00:30:29.287 "name": "nvme0n1", 00:30:29.287 "aliases": [ 00:30:29.287 "cb4bfecf-3b77-4b73-85a5-10ccc729a6d9" 00:30:29.287 ], 00:30:29.287 "product_name": "NVMe disk", 00:30:29.287 "block_size": 4096, 00:30:29.287 "num_blocks": 1310720, 00:30:29.287 "uuid": "cb4bfecf-3b77-4b73-85a5-10ccc729a6d9", 00:30:29.287 "numa_id": -1, 00:30:29.287 "assigned_rate_limits": { 00:30:29.287 "rw_ios_per_sec": 0, 00:30:29.287 "rw_mbytes_per_sec": 0, 00:30:29.287 "r_mbytes_per_sec": 0, 00:30:29.287 "w_mbytes_per_sec": 0 00:30:29.287 }, 00:30:29.287 "claimed": true, 00:30:29.287 "claim_type": "read_many_write_one", 00:30:29.287 "zoned": false, 00:30:29.287 "supported_io_types": { 00:30:29.287 "read": true, 00:30:29.287 "write": true, 00:30:29.287 "unmap": true, 00:30:29.287 "flush": true, 00:30:29.287 "reset": true, 00:30:29.287 "nvme_admin": true, 00:30:29.287 "nvme_io": true, 00:30:29.287 "nvme_io_md": false, 00:30:29.287 "write_zeroes": true, 00:30:29.287 "zcopy": false, 00:30:29.287 "get_zone_info": false, 00:30:29.287 "zone_management": false, 00:30:29.287 "zone_append": false, 00:30:29.287 "compare": true, 00:30:29.287 "compare_and_write": false, 00:30:29.287 "abort": true, 00:30:29.287 "seek_hole": false, 00:30:29.287 "seek_data": false, 00:30:29.287 "copy": true, 00:30:29.287 "nvme_iov_md": false 00:30:29.287 }, 00:30:29.287 "driver_specific": { 00:30:29.287 "nvme": [ 00:30:29.287 { 00:30:29.287 "pci_address": "0000:00:11.0", 00:30:29.287 "trid": { 00:30:29.287 "trtype": "PCIe", 00:30:29.287 "traddr": "0000:00:11.0" 00:30:29.287 }, 00:30:29.287 "ctrlr_data": { 00:30:29.287 "cntlid": 0, 00:30:29.287 
"vendor_id": "0x1b36", 00:30:29.287 "model_number": "QEMU NVMe Ctrl", 00:30:29.287 "serial_number": "12341", 00:30:29.287 "firmware_revision": "8.0.0", 00:30:29.287 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:29.287 "oacs": { 00:30:29.287 "security": 0, 00:30:29.287 "format": 1, 00:30:29.287 "firmware": 0, 00:30:29.287 "ns_manage": 1 00:30:29.287 }, 00:30:29.287 "multi_ctrlr": false, 00:30:29.287 "ana_reporting": false 00:30:29.287 }, 00:30:29.287 "vs": { 00:30:29.287 "nvme_version": "1.4" 00:30:29.287 }, 00:30:29.287 "ns_data": { 00:30:29.287 "id": 1, 00:30:29.287 "can_share": false 00:30:29.287 } 00:30:29.287 } 00:30:29.287 ], 00:30:29.287 "mp_policy": "active_passive" 00:30:29.287 } 00:30:29.287 } 00:30:29.287 ]' 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:29.287 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:29.546 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=9d9e8c99-67f4-4d9e-ab59-225e9f0f02b7 00:30:29.546 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:29.546 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9d9e8c99-67f4-4d9e-ab59-225e9f0f02b7 00:30:29.805 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=7b05f0a4-8340-4ad3-82df-3ec1983080e8 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7b05f0a4-8340-4ad3-82df-3ec1983080e8 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.064 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:30.322 01:15:52 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.322 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local 
bdev_name=1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.322 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:30.322 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:30.322 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:30.322 01:15:52 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.322 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:30.322 { 00:30:30.322 "name": "1a79d633-d333-4f6e-90f6-6cd1da5e845f", 00:30:30.322 "aliases": [ 00:30:30.322 "lvs/nvme0n1p0" 00:30:30.322 ], 00:30:30.322 "product_name": "Logical Volume", 00:30:30.322 "block_size": 4096, 00:30:30.322 "num_blocks": 26476544, 00:30:30.322 "uuid": "1a79d633-d333-4f6e-90f6-6cd1da5e845f", 00:30:30.322 "assigned_rate_limits": { 00:30:30.322 "rw_ios_per_sec": 0, 00:30:30.322 "rw_mbytes_per_sec": 0, 00:30:30.322 "r_mbytes_per_sec": 0, 00:30:30.322 "w_mbytes_per_sec": 0 00:30:30.322 }, 00:30:30.322 "claimed": false, 00:30:30.322 "zoned": false, 00:30:30.322 "supported_io_types": { 00:30:30.322 "read": true, 00:30:30.322 "write": true, 00:30:30.322 "unmap": true, 00:30:30.322 "flush": false, 00:30:30.322 "reset": true, 00:30:30.322 "nvme_admin": false, 00:30:30.322 "nvme_io": false, 00:30:30.322 "nvme_io_md": false, 00:30:30.322 "write_zeroes": true, 00:30:30.322 "zcopy": false, 00:30:30.322 "get_zone_info": false, 00:30:30.322 "zone_management": false, 00:30:30.322 "zone_append": false, 00:30:30.322 "compare": false, 00:30:30.322 "compare_and_write": false, 00:30:30.322 "abort": false, 00:30:30.322 "seek_hole": true, 00:30:30.322 "seek_data": true, 00:30:30.322 "copy": false, 00:30:30.322 "nvme_iov_md": false 00:30:30.322 }, 00:30:30.322 "driver_specific": { 00:30:30.322 "lvol": { 00:30:30.322 "lvol_store_uuid": "7b05f0a4-8340-4ad3-82df-3ec1983080e8", 00:30:30.322 "base_bdev": "nvme0n1", 00:30:30.322 "thin_provision": true, 00:30:30.322 "num_allocated_clusters": 0, 00:30:30.323 "snapshot": false, 00:30:30.323 "clone": false, 00:30:30.323 "esnap_clone": false 00:30:30.323 } 00:30:30.323 } 00:30:30.323 } 00:30:30.323 ]' 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:30.323 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1382 -- # local bdev_name=1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:30.582 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:30.841 { 00:30:30.841 "name": "1a79d633-d333-4f6e-90f6-6cd1da5e845f", 00:30:30.841 "aliases": [ 00:30:30.841 "lvs/nvme0n1p0" 00:30:30.841 ], 00:30:30.841 "product_name": "Logical Volume", 00:30:30.841 "block_size": 4096, 00:30:30.841 "num_blocks": 26476544, 00:30:30.841 "uuid": "1a79d633-d333-4f6e-90f6-6cd1da5e845f", 00:30:30.841 "assigned_rate_limits": { 00:30:30.841 "rw_ios_per_sec": 0, 00:30:30.841 "rw_mbytes_per_sec": 0, 00:30:30.841 "r_mbytes_per_sec": 0, 00:30:30.841 "w_mbytes_per_sec": 0 00:30:30.841 }, 00:30:30.841 "claimed": false, 00:30:30.841 "zoned": false, 00:30:30.841 "supported_io_types": { 00:30:30.841 "read": true, 00:30:30.841 "write": true, 00:30:30.841 "unmap": true, 00:30:30.841 "flush": false, 00:30:30.841 "reset": true, 00:30:30.841 "nvme_admin": false, 00:30:30.841 "nvme_io": false, 00:30:30.841 "nvme_io_md": false, 00:30:30.841 "write_zeroes": true, 00:30:30.841 "zcopy": false, 00:30:30.841 "get_zone_info": false, 00:30:30.841 "zone_management": false, 00:30:30.841 "zone_append": false, 00:30:30.841 "compare": false, 00:30:30.841 "compare_and_write": false, 00:30:30.841 "abort": false, 00:30:30.841 "seek_hole": true, 00:30:30.841 "seek_data": true, 00:30:30.841 "copy": false, 00:30:30.841 "nvme_iov_md": false 00:30:30.841 }, 00:30:30.841 "driver_specific": { 00:30:30.841 "lvol": { 00:30:30.841 "lvol_store_uuid": "7b05f0a4-8340-4ad3-82df-3ec1983080e8", 00:30:30.841 "base_bdev": "nvme0n1", 00:30:30.841 "thin_provision": true, 00:30:30.841 "num_allocated_clusters": 0, 00:30:30.841 "snapshot": false, 00:30:30.841 "clone": false, 00:30:30.841 "esnap_clone": false 00:30:30.841 } 00:30:30.841 } 00:30:30.841 } 00:30:30.841 ]' 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:30.841 01:15:53 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:31.102 01:15:53 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:31.102 01:15:53 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:31.102 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:31.102 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 
-- # local bdev_info 00:30:31.102 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:31.102 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:31.102 01:15:53 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1a79d633-d333-4f6e-90f6-6cd1da5e845f 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:31.362 { 00:30:31.362 "name": "1a79d633-d333-4f6e-90f6-6cd1da5e845f", 00:30:31.362 "aliases": [ 00:30:31.362 "lvs/nvme0n1p0" 00:30:31.362 ], 00:30:31.362 "product_name": "Logical Volume", 00:30:31.362 "block_size": 4096, 00:30:31.362 "num_blocks": 26476544, 00:30:31.362 "uuid": "1a79d633-d333-4f6e-90f6-6cd1da5e845f", 00:30:31.362 "assigned_rate_limits": { 00:30:31.362 "rw_ios_per_sec": 0, 00:30:31.362 "rw_mbytes_per_sec": 0, 00:30:31.362 "r_mbytes_per_sec": 0, 00:30:31.362 "w_mbytes_per_sec": 0 00:30:31.362 }, 00:30:31.362 "claimed": false, 00:30:31.362 "zoned": false, 00:30:31.362 "supported_io_types": { 00:30:31.362 "read": true, 00:30:31.362 "write": true, 00:30:31.362 "unmap": true, 00:30:31.362 "flush": false, 00:30:31.362 "reset": true, 00:30:31.362 "nvme_admin": false, 00:30:31.362 "nvme_io": false, 00:30:31.362 "nvme_io_md": false, 00:30:31.362 "write_zeroes": true, 00:30:31.362 "zcopy": false, 00:30:31.362 "get_zone_info": false, 00:30:31.362 "zone_management": false, 00:30:31.362 "zone_append": false, 00:30:31.362 "compare": false, 00:30:31.362 "compare_and_write": false, 00:30:31.362 "abort": false, 00:30:31.362 "seek_hole": true, 00:30:31.362 "seek_data": true, 00:30:31.362 "copy": false, 00:30:31.362 "nvme_iov_md": false 00:30:31.362 }, 00:30:31.362 "driver_specific": { 00:30:31.362 "lvol": { 00:30:31.362 "lvol_store_uuid": "7b05f0a4-8340-4ad3-82df-3ec1983080e8", 00:30:31.362 "base_bdev": "nvme0n1", 00:30:31.362 "thin_provision": true, 00:30:31.362 "num_allocated_clusters": 0, 00:30:31.362 "snapshot": false, 00:30:31.362 "clone": false, 00:30:31.362 "esnap_clone": false 00:30:31.362 } 00:30:31.362 } 00:30:31.362 } 00:30:31.362 ]' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1a79d633-d333-4f6e-90f6-6cd1da5e845f --l2p_dram_limit 10' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:31.362 01:15:54 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1a79d633-d333-4f6e-90f6-6cd1da5e845f --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:31.624 [2024-11-26 01:15:54.376718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.376944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:31.624 [2024-11-26 01:15:54.376979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:31.624 [2024-11-26 01:15:54.376989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.377081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.377097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:31.624 [2024-11-26 01:15:54.377117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:31.624 [2024-11-26 01:15:54.377129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.377159] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:31.624 [2024-11-26 01:15:54.377589] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:31.624 [2024-11-26 01:15:54.377625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.377635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:31.624 [2024-11-26 01:15:54.377648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:30:31.624 [2024-11-26 01:15:54.377665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.377709] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b7b160b2-bbfe-4468-82dc-525b9e97e52c 00:30:31.624 [2024-11-26 01:15:54.379506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.379560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:31.624 [2024-11-26 01:15:54.379572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:31.624 [2024-11-26 01:15:54.379583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.388158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.388207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:31.624 [2024-11-26 01:15:54.388218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.496 ms 00:30:31.624 [2024-11-26 01:15:54.388231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.388340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.388353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:31.624 [2024-11-26 01:15:54.388362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:30:31.624 [2024-11-26 01:15:54.388372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.388436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.388448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO 
device 00:30:31.624 [2024-11-26 01:15:54.388457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:31.624 [2024-11-26 01:15:54.388466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.388489] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:31.624 [2024-11-26 01:15:54.390774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.390812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:31.624 [2024-11-26 01:15:54.390825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.286 ms 00:30:31.624 [2024-11-26 01:15:54.390833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.390891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.390899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:31.624 [2024-11-26 01:15:54.390913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:30:31.624 [2024-11-26 01:15:54.390921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.390940] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:31.624 [2024-11-26 01:15:54.391092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:31.624 [2024-11-26 01:15:54.391108] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:31.624 [2024-11-26 01:15:54.391119] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:31.624 [2024-11-26 01:15:54.391142] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:31.624 [2024-11-26 01:15:54.391152] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:31.624 [2024-11-26 01:15:54.391167] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:31.624 [2024-11-26 01:15:54.391178] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:31.624 [2024-11-26 01:15:54.391188] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:31.624 [2024-11-26 01:15:54.391195] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:31.624 [2024-11-26 01:15:54.391205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.391212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:31.624 [2024-11-26 01:15:54.391226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:30:31.624 [2024-11-26 01:15:54.391233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 [2024-11-26 01:15:54.391322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.624 [2024-11-26 01:15:54.391331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:31.624 [2024-11-26 01:15:54.391341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:30:31.624 [2024-11-26 01:15:54.391350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.624 
[2024-11-26 01:15:54.391451] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:31.624 [2024-11-26 01:15:54.391461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:31.624 [2024-11-26 01:15:54.391473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:31.624 [2024-11-26 01:15:54.391485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:31.624 [2024-11-26 01:15:54.391497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:31.624 [2024-11-26 01:15:54.391505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:31.624 [2024-11-26 01:15:54.391515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:31.624 [2024-11-26 01:15:54.391523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:31.624 [2024-11-26 01:15:54.391533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:31.624 [2024-11-26 01:15:54.391541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:31.624 [2024-11-26 01:15:54.391551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:31.624 [2024-11-26 01:15:54.391559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:31.624 [2024-11-26 01:15:54.391571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:31.624 [2024-11-26 01:15:54.391579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:31.624 [2024-11-26 01:15:54.391589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:31.624 [2024-11-26 01:15:54.391598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:31.624 [2024-11-26 01:15:54.391608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:31.624 [2024-11-26 01:15:54.391617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:31.624 [2024-11-26 01:15:54.391627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:31.624 [2024-11-26 01:15:54.391635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:31.625 [2024-11-26 01:15:54.391645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:31.625 [2024-11-26 01:15:54.391653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:31.625 [2024-11-26 01:15:54.391663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:31.625 [2024-11-26 01:15:54.391670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:31.625 [2024-11-26 01:15:54.391681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:31.625 [2024-11-26 01:15:54.391689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:31.625 [2024-11-26 01:15:54.391699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:31.625 [2024-11-26 01:15:54.391706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:31.625 [2024-11-26 01:15:54.391718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:31.625 [2024-11-26 01:15:54.391726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:31.625 [2024-11-26 01:15:54.391737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:31.625 [2024-11-26 01:15:54.391745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:30:31.625 [2024-11-26 01:15:54.391755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:31.625 [2024-11-26 01:15:54.391763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:31.625 [2024-11-26 01:15:54.391773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:31.625 [2024-11-26 01:15:54.391780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:31.625 [2024-11-26 01:15:54.391790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:31.625 [2024-11-26 01:15:54.391797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:31.625 [2024-11-26 01:15:54.391807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:31.625 [2024-11-26 01:15:54.391814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:31.625 [2024-11-26 01:15:54.391823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:31.625 [2024-11-26 01:15:54.391829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:31.625 [2024-11-26 01:15:54.392136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:31.625 [2024-11-26 01:15:54.392217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:31.625 [2024-11-26 01:15:54.392253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:31.625 [2024-11-26 01:15:54.392300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:31.625 [2024-11-26 01:15:54.392328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:31.625 [2024-11-26 01:15:54.392353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:31.625 [2024-11-26 01:15:54.392375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:31.625 [2024-11-26 01:15:54.392396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:31.625 [2024-11-26 01:15:54.392453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:31.625 [2024-11-26 01:15:54.392476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:31.625 [2024-11-26 01:15:54.392497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:31.625 [2024-11-26 01:15:54.392521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:31.625 [2024-11-26 01:15:54.392559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:31.625 [2024-11-26 01:15:54.392622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:31.625 [2024-11-26 01:15:54.392656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:31.625 [2024-11-26 01:15:54.392685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:31.625 [2024-11-26 01:15:54.392715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:31.625 [2024-11-26 01:15:54.392774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:30:31.625 [2024-11-26 01:15:54.392859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:31.625 [2024-11-26 01:15:54.392923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:31.625 [2024-11-26 01:15:54.392958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:31.625 [2024-11-26 01:15:54.392987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:31.625 [2024-11-26 01:15:54.393055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:31.625 [2024-11-26 01:15:54.393087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:31.625 [2024-11-26 01:15:54.393119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:31.625 [2024-11-26 01:15:54.393148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:31.625 [2024-11-26 01:15:54.393206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:31.625 [2024-11-26 01:15:54.393236] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:31.625 [2024-11-26 01:15:54.393268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:31.625 [2024-11-26 01:15:54.393298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:31.625 [2024-11-26 01:15:54.393358] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:31.625 [2024-11-26 01:15:54.393416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:31.625 [2024-11-26 01:15:54.393429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:31.625 [2024-11-26 01:15:54.393438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:31.625 [2024-11-26 01:15:54.393452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:31.625 [2024-11-26 01:15:54.393461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:30:31.625 [2024-11-26 01:15:54.393472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:31.625 [2024-11-26 01:15:54.393547] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
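The figures in the layout dump above are internally consistent: 20971520 L2P entries at an address size of 4 bytes need 20971520 x 4 B = 83886080 B = 80.00 MiB, which is exactly the size reported for Region l2p, and the "NV cache chunk count 5" announced during layout setup matches the "Scrubbing 5 chunks" step that follows.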
00:30:31.625 [2024-11-26 01:15:54.393562] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:34.918 [2024-11-26 01:15:57.415805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.918 [2024-11-26 01:15:57.415915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:34.918 [2024-11-26 01:15:57.415934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3022.245 ms 00:30:34.918 [2024-11-26 01:15:57.415946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.430109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.430171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:34.919 [2024-11-26 01:15:57.430185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.042 ms 00:30:34.919 [2024-11-26 01:15:57.430212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.430335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.430348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:34.919 [2024-11-26 01:15:57.430360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:34.919 [2024-11-26 01:15:57.430371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.442892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.442948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:34.919 [2024-11-26 01:15:57.442959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.479 ms 00:30:34.919 [2024-11-26 01:15:57.442973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.443007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.443020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:34.919 [2024-11-26 01:15:57.443028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:34.919 [2024-11-26 01:15:57.443038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.443575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.443618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:34.919 [2024-11-26 01:15:57.443629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:30:34.919 [2024-11-26 01:15:57.443641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.443759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.443770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:34.919 [2024-11-26 01:15:57.443782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:30:34.919 [2024-11-26 01:15:57.443796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.451862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.451908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:34.919 [2024-11-26 
01:15:57.451919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.027 ms 00:30:34.919 [2024-11-26 01:15:57.451929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.461564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:34.919 [2024-11-26 01:15:57.465270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.465457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:34.919 [2024-11-26 01:15:57.465487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.261 ms 00:30:34.919 [2024-11-26 01:15:57.465499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.574047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.574146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:34.919 [2024-11-26 01:15:57.574169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.506 ms 00:30:34.919 [2024-11-26 01:15:57.574179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.574391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.574404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:34.919 [2024-11-26 01:15:57.574416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:30:34.919 [2024-11-26 01:15:57.574424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.580430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.580486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:30:34.919 [2024-11-26 01:15:57.580504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.976 ms 00:30:34.919 [2024-11-26 01:15:57.580513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.585678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.585728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:34.919 [2024-11-26 01:15:57.585742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.109 ms 00:30:34.919 [2024-11-26 01:15:57.585749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.586142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.586157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:34.919 [2024-11-26 01:15:57.586171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:30:34.919 [2024-11-26 01:15:57.586180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.626976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.627031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:34.919 [2024-11-26 01:15:57.627050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.766 ms 00:30:34.919 [2024-11-26 01:15:57.627058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.634025] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.634086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:34.919 [2024-11-26 01:15:57.634102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:30:34.919 [2024-11-26 01:15:57.634117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.639900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.639943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:34.919 [2024-11-26 01:15:57.639955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.727 ms 00:30:34.919 [2024-11-26 01:15:57.639963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.645890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.645934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:34.919 [2024-11-26 01:15:57.645950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.877 ms 00:30:34.919 [2024-11-26 01:15:57.645957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.646009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.646019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:34.919 [2024-11-26 01:15:57.646030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:34.919 [2024-11-26 01:15:57.646038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.646136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:34.919 [2024-11-26 01:15:57.646146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:34.919 [2024-11-26 01:15:57.646160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:34.919 [2024-11-26 01:15:57.646172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:34.919 [2024-11-26 01:15:57.647270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3270.072 ms, result 0 00:30:34.919 { 00:30:34.919 "name": "ftl0", 00:30:34.919 "uuid": "b7b160b2-bbfe-4468-82dc-525b9e97e52c" 00:30:34.919 } 00:30:34.919 01:15:57 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:34.919 01:15:57 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:35.180 01:15:57 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:35.180 01:15:57 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:35.442 [2024-11-26 01:15:58.093514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.093578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:35.442 [2024-11-26 01:15:58.093593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:35.442 [2024-11-26 01:15:58.093604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.093630] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
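The shutdown trace that begins here is triggered by restore.sh lines 61-65 above: the test wraps the live bdev subsystem configuration in a subsystems array, saves it for the later spdk_dd run, and then unloads the device. Spelled out as a sketch (the ftl.json path is the one this run passes to spdk_dd further down):

  # Capture the bdev config in the JSON shape spdk_dd expects.
  {
    echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > test/ftl/config/ftl.json
  # Tear the device down; the 'Set FTL clean state' and 'Persist superblock'
  # steps below are what let the later restore startup load the superblock
  # instead of re-initializing from scratch.
  scripts/rpc.py bdev_ftl_unload -b ftl0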
00:30:35.442 [2024-11-26 01:15:58.094466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.094506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:35.442 [2024-11-26 01:15:58.094521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:30:35.442 [2024-11-26 01:15:58.094530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.094806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.094860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:35.442 [2024-11-26 01:15:58.094879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:30:35.442 [2024-11-26 01:15:58.094888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.098149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.098170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:35.442 [2024-11-26 01:15:58.098184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:30:35.442 [2024-11-26 01:15:58.098194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.104538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.104744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:35.442 [2024-11-26 01:15:58.104774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.320 ms 00:30:35.442 [2024-11-26 01:15:58.104786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.107798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.107981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:35.442 [2024-11-26 01:15:58.108024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:30:35.442 [2024-11-26 01:15:58.108045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.115987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.116369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:35.442 [2024-11-26 01:15:58.116568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.657 ms 00:30:35.442 [2024-11-26 01:15:58.116640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.117156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.117255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:35.442 [2024-11-26 01:15:58.117498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:30:35.442 [2024-11-26 01:15:58.117690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.120873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.121137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:35.442 [2024-11-26 01:15:58.121370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:30:35.442 [2024-11-26 01:15:58.121510] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.123763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.123959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:35.442 [2024-11-26 01:15:58.124033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:30:35.442 [2024-11-26 01:15:58.124057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.125775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.125981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:35.442 [2024-11-26 01:15:58.126048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:30:35.442 [2024-11-26 01:15:58.126133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.127639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.442 [2024-11-26 01:15:58.127807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:35.442 [2024-11-26 01:15:58.127888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:30:35.442 [2024-11-26 01:15:58.127956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.442 [2024-11-26 01:15:58.128013] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:35.442 [2024-11-26 01:15:58.128062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.128867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 
01:15:58.129111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:35.442 [2024-11-26 01:15:58.129209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:30:35.443 [2024-11-26 01:15:58.129335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.129838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.130131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.130164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.130245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:35.443 [2024-11-26 01:15:58.130283] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:35.443 [2024-11-26 01:15:58.130313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7b160b2-bbfe-4468-82dc-525b9e97e52c 00:30:35.443 [2024-11-26 01:15:58.130379] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:35.443 [2024-11-26 01:15:58.130405] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:35.443 [2024-11-26 01:15:58.130424] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:35.443 [2024-11-26 01:15:58.130447] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:35.443 [2024-11-26 01:15:58.130471] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:35.443 [2024-11-26 01:15:58.130517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:35.443 [2024-11-26 01:15:58.130581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:35.443 [2024-11-26 01:15:58.130606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:35.443 [2024-11-26 01:15:58.130650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:35.443 [2024-11-26 01:15:58.130686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.443 [2024-11-26 01:15:58.130715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:35.443 [2024-11-26 01:15:58.130777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:30:35.443 [2024-11-26 01:15:58.130801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.443 [2024-11-26 01:15:58.133175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.443 [2024-11-26 01:15:58.133332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:30:35.443 [2024-11-26 01:15:58.133361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.294 ms 00:30:35.444 [2024-11-26 01:15:58.133369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.133522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:35.444 [2024-11-26 01:15:58.133533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:35.444 [2024-11-26 01:15:58.133545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:30:35.444 [2024-11-26 01:15:58.133554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.141458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.141621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:35.444 [2024-11-26 01:15:58.141695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.141719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.141798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.141923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:35.444 [2024-11-26 01:15:58.141953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.141974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.142276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.142307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:35.444 [2024-11-26 01:15:58.142322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.142331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.142351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.142360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:35.444 [2024-11-26 01:15:58.142371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.142379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.155867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.155914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:35.444 [2024-11-26 01:15:58.155936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.155945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.166271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.166319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:35.444 [2024-11-26 01:15:58.166333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.166341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.166420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.166431] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:35.444 [2024-11-26 01:15:58.166442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.166449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.166502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.166511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:35.444 [2024-11-26 01:15:58.166522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.166529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.166600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.166609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:35.444 [2024-11-26 01:15:58.166619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.166627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.166663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.166672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:35.444 [2024-11-26 01:15:58.166682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.166689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.166736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.166745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:35.444 [2024-11-26 01:15:58.166756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.166763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.166821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:35.444 [2024-11-26 01:15:58.166832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:35.444 [2024-11-26 01:15:58.166867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:35.444 [2024-11-26 01:15:58.166876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:35.444 [2024-11-26 01:15:58.167023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.466 ms, result 0 00:30:35.444 true 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 96747 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96747 ']' 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96747 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96747 00:30:35.444 killing process with pid 96747 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96747' 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 96747 00:30:35.444 01:15:58 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 96747 00:30:39.667 01:16:02 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:43.871 262144+0 records in 00:30:43.871 262144+0 records out 00:30:43.871 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.90911 s, 275 MB/s 00:30:43.871 01:16:06 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:45.341 01:16:08 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:45.341 [2024-11-26 01:16:08.081252] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:30:45.341 [2024-11-26 01:16:08.081349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96939 ] 00:30:45.341 [2024-11-26 01:16:08.206991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:45.341 [2024-11-26 01:16:08.237057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.603 [2024-11-26 01:16:08.262099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.603 [2024-11-26 01:16:08.352788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:45.603 [2024-11-26 01:16:08.353068] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:45.603 [2024-11-26 01:16:08.511629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.603 [2024-11-26 01:16:08.511689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:45.603 [2024-11-26 01:16:08.511704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:45.603 [2024-11-26 01:16:08.511718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.603 [2024-11-26 01:16:08.511778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.603 [2024-11-26 01:16:08.511789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:45.603 [2024-11-26 01:16:08.511798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:45.603 [2024-11-26 01:16:08.511809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.603 [2024-11-26 01:16:08.511830] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:45.603 [2024-11-26 01:16:08.512134] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:45.603 [2024-11-26 01:16:08.512152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.603 [2024-11-26 01:16:08.512163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:45.603 [2024-11-26 
01:16:08.512178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:30:45.603 [2024-11-26 01:16:08.512187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.603 [2024-11-26 01:16:08.513791] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:45.868 [2024-11-26 01:16:08.517513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.517562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:45.868 [2024-11-26 01:16:08.517581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:30:45.868 [2024-11-26 01:16:08.517593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.517671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.517682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:45.868 [2024-11-26 01:16:08.517692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:45.868 [2024-11-26 01:16:08.517701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.525627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.525683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:45.868 [2024-11-26 01:16:08.525693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.884 ms 00:30:45.868 [2024-11-26 01:16:08.525708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.525869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.525880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:45.868 [2024-11-26 01:16:08.525891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:30:45.868 [2024-11-26 01:16:08.525904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.525962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.525972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:45.868 [2024-11-26 01:16:08.525986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:45.868 [2024-11-26 01:16:08.525999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.526022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:45.868 [2024-11-26 01:16:08.528058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.528097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:45.868 [2024-11-26 01:16:08.528107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:30:45.868 [2024-11-26 01:16:08.528115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.528148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.528157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:45.868 [2024-11-26 01:16:08.528171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:45.868 [2024-11-26 
01:16:08.528181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.528204] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:45.868 [2024-11-26 01:16:08.528225] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:45.868 [2024-11-26 01:16:08.528261] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:45.868 [2024-11-26 01:16:08.528277] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:45.868 [2024-11-26 01:16:08.528383] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:45.868 [2024-11-26 01:16:08.528397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:45.868 [2024-11-26 01:16:08.528412] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:45.868 [2024-11-26 01:16:08.528424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528434] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528442] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:45.868 [2024-11-26 01:16:08.528450] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:45.868 [2024-11-26 01:16:08.528458] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:45.868 [2024-11-26 01:16:08.528466] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:45.868 [2024-11-26 01:16:08.528474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.528482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:45.868 [2024-11-26 01:16:08.528493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:30:45.868 [2024-11-26 01:16:08.528504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.528588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.868 [2024-11-26 01:16:08.528597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:45.868 [2024-11-26 01:16:08.528610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:45.868 [2024-11-26 01:16:08.528618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.868 [2024-11-26 01:16:08.528725] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:45.868 [2024-11-26 01:16:08.528736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:45.868 [2024-11-26 01:16:08.528748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:45.868 [2024-11-26 01:16:08.528774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:45.868 [2024-11-26 01:16:08.528803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:45.868 [2024-11-26 01:16:08.528821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:45.868 [2024-11-26 01:16:08.528828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:45.868 [2024-11-26 01:16:08.528835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:45.868 [2024-11-26 01:16:08.528858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:45.868 [2024-11-26 01:16:08.528865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:45.868 [2024-11-26 01:16:08.528873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:45.868 [2024-11-26 01:16:08.528888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:45.868 [2024-11-26 01:16:08.528912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:45.868 [2024-11-26 01:16:08.528933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:45.868 [2024-11-26 01:16:08.528961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:45.868 [2024-11-26 01:16:08.528982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:45.868 [2024-11-26 01:16:08.528989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:45.868 [2024-11-26 01:16:08.528996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:45.868 [2024-11-26 01:16:08.529003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:45.868 [2024-11-26 01:16:08.529010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:45.868 [2024-11-26 01:16:08.529018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:45.868 [2024-11-26 01:16:08.529025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:45.868 [2024-11-26 01:16:08.529032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:45.868 [2024-11-26 01:16:08.529038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:45.868 [2024-11-26 01:16:08.529045] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:45.868 [2024-11-26 01:16:08.529052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.868 [2024-11-26 01:16:08.529059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:45.868 [2024-11-26 01:16:08.529068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:45.868 [2024-11-26 01:16:08.529076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.868 [2024-11-26 01:16:08.529083] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:45.868 [2024-11-26 01:16:08.529095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:45.868 [2024-11-26 01:16:08.529105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:45.868 [2024-11-26 01:16:08.529113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:45.868 [2024-11-26 01:16:08.529120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:45.868 [2024-11-26 01:16:08.529127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:45.868 [2024-11-26 01:16:08.529134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:45.868 [2024-11-26 01:16:08.529141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:45.868 [2024-11-26 01:16:08.529149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:45.869 [2024-11-26 01:16:08.529156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:45.869 [2024-11-26 01:16:08.529165] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:45.869 [2024-11-26 01:16:08.529174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:45.869 [2024-11-26 01:16:08.529183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:45.869 [2024-11-26 01:16:08.529191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:45.869 [2024-11-26 01:16:08.529201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:45.869 [2024-11-26 01:16:08.529208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:45.869 [2024-11-26 01:16:08.529216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:45.869 [2024-11-26 01:16:08.529224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:45.869 [2024-11-26 01:16:08.529231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:45.869 [2024-11-26 01:16:08.529238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:45.869 [2024-11-26 01:16:08.529245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:45.869 [2024-11-26 
01:16:08.529252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:45.869 [2024-11-26 01:16:08.529259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:45.869 [2024-11-26 01:16:08.529267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:45.869 [2024-11-26 01:16:08.529274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:45.869 [2024-11-26 01:16:08.529282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:45.869 [2024-11-26 01:16:08.529288] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:45.869 [2024-11-26 01:16:08.529297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:45.869 [2024-11-26 01:16:08.529306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:45.869 [2024-11-26 01:16:08.529313] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:45.869 [2024-11-26 01:16:08.529324] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:45.869 [2024-11-26 01:16:08.529331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:45.869 [2024-11-26 01:16:08.529339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.529347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:45.869 [2024-11-26 01:16:08.529354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:30:45.869 [2024-11-26 01:16:08.529364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.543183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.543231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:45.869 [2024-11-26 01:16:08.543243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.777 ms 00:30:45.869 [2024-11-26 01:16:08.543258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.543346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.543357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:45.869 [2024-11-26 01:16:08.543368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:45.869 [2024-11-26 01:16:08.543377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.564291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.564346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:45.869 [2024-11-26 01:16:08.564361] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 20.854 ms 00:30:45.869 [2024-11-26 01:16:08.564369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.564416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.564426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:45.869 [2024-11-26 01:16:08.564444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:45.869 [2024-11-26 01:16:08.564458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.565021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.565061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:45.869 [2024-11-26 01:16:08.565073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:30:45.869 [2024-11-26 01:16:08.565082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.565242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.565253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:45.869 [2024-11-26 01:16:08.565261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:30:45.869 [2024-11-26 01:16:08.565269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.573726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.573775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:45.869 [2024-11-26 01:16:08.573787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.432 ms 00:30:45.869 [2024-11-26 01:16:08.573811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.577710] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:45.869 [2024-11-26 01:16:08.577771] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:45.869 [2024-11-26 01:16:08.577785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.577795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:45.869 [2024-11-26 01:16:08.577805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.855 ms 00:30:45.869 [2024-11-26 01:16:08.577814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.593652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.593709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:45.869 [2024-11-26 01:16:08.593721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.770 ms 00:30:45.869 [2024-11-26 01:16:08.593736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.596587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.596636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:45.869 [2024-11-26 01:16:08.596646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.798 ms 00:30:45.869 [2024-11-26 
01:16:08.596654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.599359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.599402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:45.869 [2024-11-26 01:16:08.599413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:30:45.869 [2024-11-26 01:16:08.599430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.599773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.599785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:45.869 [2024-11-26 01:16:08.599799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:30:45.869 [2024-11-26 01:16:08.599807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.622670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.622743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:45.869 [2024-11-26 01:16:08.622757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.842 ms 00:30:45.869 [2024-11-26 01:16:08.622770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.630984] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:45.869 [2024-11-26 01:16:08.633919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.633963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:45.869 [2024-11-26 01:16:08.633977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.099 ms 00:30:45.869 [2024-11-26 01:16:08.633990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.634068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.634091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:45.869 [2024-11-26 01:16:08.634105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:45.869 [2024-11-26 01:16:08.634114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.634180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.634191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:45.869 [2024-11-26 01:16:08.634203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:45.869 [2024-11-26 01:16:08.634211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.634232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.634240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:45.869 [2024-11-26 01:16:08.634249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:45.869 [2024-11-26 01:16:08.634263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.869 [2024-11-26 01:16:08.634298] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:45.869 [2024-11-26 01:16:08.634309] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.869 [2024-11-26 01:16:08.634317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:45.869 [2024-11-26 01:16:08.634325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:45.870 [2024-11-26 01:16:08.634336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.870 [2024-11-26 01:16:08.639586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.870 [2024-11-26 01:16:08.639648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:45.870 [2024-11-26 01:16:08.639659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.231 ms 00:30:45.870 [2024-11-26 01:16:08.639668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.870 [2024-11-26 01:16:08.639752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:45.870 [2024-11-26 01:16:08.639766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:45.870 [2024-11-26 01:16:08.639775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:45.870 [2024-11-26 01:16:08.639784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:45.870 [2024-11-26 01:16:08.641121] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.910 ms, result 0 00:30:46.816  [2024-11-26T01:16:10.674Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-26T01:16:12.056Z] Copying: 23/1024 [MB] (12 MBps) [2024-11-26T01:16:12.996Z] Copying: 56/1024 [MB] (32 MBps) [2024-11-26T01:16:13.933Z] Copying: 77/1024 [MB] (21 MBps) [2024-11-26T01:16:14.869Z] Copying: 107/1024 [MB] (30 MBps) [2024-11-26T01:16:15.813Z] Copying: 149/1024 [MB] (41 MBps) [2024-11-26T01:16:16.756Z] Copying: 165/1024 [MB] (16 MBps) [2024-11-26T01:16:17.699Z] Copying: 181/1024 [MB] (15 MBps) [2024-11-26T01:16:19.075Z] Copying: 201/1024 [MB] (19 MBps) [2024-11-26T01:16:19.664Z] Copying: 248/1024 [MB] (46 MBps) [2024-11-26T01:16:21.053Z] Copying: 276/1024 [MB] (28 MBps) [2024-11-26T01:16:21.995Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-26T01:16:22.933Z] Copying: 299/1024 [MB] (10 MBps) [2024-11-26T01:16:23.879Z] Copying: 324/1024 [MB] (24 MBps) [2024-11-26T01:16:24.824Z] Copying: 342/1024 [MB] (18 MBps) [2024-11-26T01:16:25.770Z] Copying: 354/1024 [MB] (11 MBps) [2024-11-26T01:16:26.713Z] Copying: 371/1024 [MB] (17 MBps) [2024-11-26T01:16:27.656Z] Copying: 387/1024 [MB] (15 MBps) [2024-11-26T01:16:29.040Z] Copying: 409/1024 [MB] (21 MBps) [2024-11-26T01:16:29.999Z] Copying: 424/1024 [MB] (15 MBps) [2024-11-26T01:16:30.941Z] Copying: 438/1024 [MB] (14 MBps) [2024-11-26T01:16:31.884Z] Copying: 459/1024 [MB] (20 MBps) [2024-11-26T01:16:32.836Z] Copying: 484/1024 [MB] (25 MBps) [2024-11-26T01:16:33.782Z] Copying: 504/1024 [MB] (20 MBps) [2024-11-26T01:16:34.727Z] Copying: 522/1024 [MB] (17 MBps) [2024-11-26T01:16:35.670Z] Copying: 538/1024 [MB] (16 MBps) [2024-11-26T01:16:37.061Z] Copying: 552/1024 [MB] (13 MBps) [2024-11-26T01:16:38.003Z] Copying: 568/1024 [MB] (15 MBps) [2024-11-26T01:16:38.949Z] Copying: 587/1024 [MB] (19 MBps) [2024-11-26T01:16:39.894Z] Copying: 598/1024 [MB] (11 MBps) [2024-11-26T01:16:40.839Z] Copying: 610/1024 [MB] (11 MBps) [2024-11-26T01:16:41.786Z] Copying: 621/1024 [MB] (11 MBps) [2024-11-26T01:16:42.830Z] Copying: 632/1024 [MB] (11 MBps) [2024-11-26T01:16:43.775Z] Copying: 644/1024 [MB] (11 MBps) 
[2024-11-26T01:16:44.721Z] Copying: 657/1024 [MB] (12 MBps) [2024-11-26T01:16:45.667Z] Copying: 670/1024 [MB] (13 MBps) [2024-11-26T01:16:47.055Z] Copying: 683/1024 [MB] (12 MBps) [2024-11-26T01:16:47.999Z] Copying: 698/1024 [MB] (15 MBps) [2024-11-26T01:16:48.944Z] Copying: 719/1024 [MB] (20 MBps) [2024-11-26T01:16:49.887Z] Copying: 737/1024 [MB] (18 MBps) [2024-11-26T01:16:50.831Z] Copying: 752/1024 [MB] (14 MBps) [2024-11-26T01:16:51.771Z] Copying: 768/1024 [MB] (16 MBps) [2024-11-26T01:16:52.714Z] Copying: 783/1024 [MB] (14 MBps) [2024-11-26T01:16:53.658Z] Copying: 798/1024 [MB] (14 MBps) [2024-11-26T01:16:55.044Z] Copying: 810/1024 [MB] (12 MBps) [2024-11-26T01:16:55.985Z] Copying: 821/1024 [MB] (11 MBps) [2024-11-26T01:16:56.930Z] Copying: 841/1024 [MB] (19 MBps) [2024-11-26T01:16:57.874Z] Copying: 858/1024 [MB] (16 MBps) [2024-11-26T01:16:58.820Z] Copying: 878/1024 [MB] (20 MBps) [2024-11-26T01:16:59.764Z] Copying: 891/1024 [MB] (12 MBps) [2024-11-26T01:17:00.710Z] Copying: 910/1024 [MB] (18 MBps) [2024-11-26T01:17:02.096Z] Copying: 927/1024 [MB] (17 MBps) [2024-11-26T01:17:02.670Z] Copying: 946/1024 [MB] (18 MBps) [2024-11-26T01:17:04.055Z] Copying: 964/1024 [MB] (18 MBps) [2024-11-26T01:17:04.997Z] Copying: 984/1024 [MB] (19 MBps) [2024-11-26T01:17:05.942Z] Copying: 1008/1024 [MB] (23 MBps) [2024-11-26T01:17:05.942Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-26 01:17:05.599534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.025 [2024-11-26 01:17:05.599581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:43.025 [2024-11-26 01:17:05.599594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:43.025 [2024-11-26 01:17:05.599605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.025 [2024-11-26 01:17:05.599625] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:43.025 [2024-11-26 01:17:05.600121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.025 [2024-11-26 01:17:05.600138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:43.025 [2024-11-26 01:17:05.600153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:31:43.025 [2024-11-26 01:17:05.600161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.025 [2024-11-26 01:17:05.601979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.025 [2024-11-26 01:17:05.602012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:43.025 [2024-11-26 01:17:05.602022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:31:43.025 [2024-11-26 01:17:05.602030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.025 [2024-11-26 01:17:05.602059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.025 [2024-11-26 01:17:05.602067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:43.025 [2024-11-26 01:17:05.602074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:43.025 [2024-11-26 01:17:05.602081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.025 [2024-11-26 01:17:05.602133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.025 [2024-11-26 01:17:05.602146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
SHM clean state 00:31:43.025 [2024-11-26 01:17:05.602153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:43.026 [2024-11-26 01:17:05.602160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.026 [2024-11-26 01:17:05.602173] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:43.026 [2024-11-26 01:17:05.602186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602373] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 
01:17:05.602554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 
00:31:43.026 [2024-11-26 01:17:05.602737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:43.026 [2024-11-26 01:17:05.602838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 
wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:43.027 [2024-11-26 01:17:05.602961] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:43.027 [2024-11-26 01:17:05.602968] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7b160b2-bbfe-4468-82dc-525b9e97e52c 00:31:43.027 [2024-11-26 01:17:05.602976] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:43.027 [2024-11-26 01:17:05.602983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:43.027 [2024-11-26 01:17:05.602994] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:43.027 [2024-11-26 01:17:05.603001] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:43.027 [2024-11-26 01:17:05.603008] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:43.027 [2024-11-26 01:17:05.603015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:43.027 [2024-11-26 01:17:05.603025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:43.027 [2024-11-26 01:17:05.603032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:43.027 [2024-11-26 01:17:05.603038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:43.027 [2024-11-26 01:17:05.603045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.027 [2024-11-26 01:17:05.603052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:43.027 [2024-11-26 01:17:05.603062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:31:43.027 [2024-11-26 01:17:05.603069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.604513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.027 [2024-11-26 01:17:05.604531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:43.027 [2024-11-26 01:17:05.604543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.430 ms 00:31:43.027 [2024-11-26 01:17:05.604550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.604626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.027 [2024-11-26 01:17:05.604639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:43.027 [2024-11-26 01:17:05.604647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:31:43.027 [2024-11-26 01:17:05.604653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.609776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.609801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:43.027 [2024-11-26 01:17:05.609810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.609817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 
01:17:05.609885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.609901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:43.027 [2024-11-26 01:17:05.609908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.609915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.609964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.609973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:43.027 [2024-11-26 01:17:05.609982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.609989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.610003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.610010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:43.027 [2024-11-26 01:17:05.610021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.610027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.619294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.619336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:43.027 [2024-11-26 01:17:05.619354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.619362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.626705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.626747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:43.027 [2024-11-26 01:17:05.626759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.626771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.626813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.626822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:43.027 [2024-11-26 01:17:05.626830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.626837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.627081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.627103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:43.027 [2024-11-26 01:17:05.627124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.627137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.627192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.627201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:43.027 [2024-11-26 01:17:05.627209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.627216] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.627239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.627248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:43.027 [2024-11-26 01:17:05.627256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.627263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.627300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.627309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:43.027 [2024-11-26 01:17:05.627316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.627324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.627361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:43.027 [2024-11-26 01:17:05.627371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:43.027 [2024-11-26 01:17:05.627379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:43.027 [2024-11-26 01:17:05.627389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.027 [2024-11-26 01:17:05.627501] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.943 ms, result 0 00:31:43.287 00:31:43.287 00:31:43.288 01:17:06 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:43.288 [2024-11-26 01:17:06.104178] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:31:43.288 [2024-11-26 01:17:06.104673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97524 ] 00:31:43.548 [2024-11-26 01:17:06.239614] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
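(Aside on reading the two layout dumps above: ftl_layout_dump prints each region in MiB, while the superblock dump from ftl_superblock_v5_md_layout_dump prints blk_offs/blk_sz in FTL blocks. The numbers are consistent with a 4 KiB block: the l2p region's blk_sz 0x5000 is 20480 blocks = 80.00 MiB, and data_btm's blk_sz 0x1900000 is 26214400 blocks = 102400.00 MiB, matching the MiB dump exactly. A minimal standalone C sketch of that conversion follows; the 4 KiB size is inferred from the dumps above, and FTL_BLOCK_SIZE here is a local stand-in define, not a value read from this log.)

/* Sketch: convert the blk_sz values from the SB metadata dump into the
 * MiB figures printed by ftl_layout_dump, assuming the 4 KiB FTL block
 * size implied by the numbers above. */
#include <stdio.h>

#define FTL_BLOCK_SIZE 4096ULL /* inferred: 0x1900000 blocks == 102400.00 MiB */

static double blocks_to_mib(unsigned long long blocks)
{
    return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
    /* Region type:0x2 (l2p):      blk_sz 0x5000    -> expect 80.00 MiB    */
    /* Region type:0x9 (data_btm): blk_sz 0x1900000 -> expect 102400.00 MiB */
    printf("l2p:      %.2f MiB\n", blocks_to_mib(0x5000ULL));
    printf("data_btm: %.2f MiB\n", blocks_to_mib(0x1900000ULL));
    return 0;
}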
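(A second aside: every management step above is traced as an Action/name/duration/status quadruple, and each pipeline closes with a "Management process finished ... duration = Y ms" summary — 128.910 ms for 'FTL startup' and 27.943 ms for 'FTL fast shutdown' in this run — so summing the per-step "duration:" records should roughly reproduce the summary. The checker below is a hypothetical helper, not part of SPDK or this test suite; it assumes one record per line, as in the raw log file, and reads from stdin.)

/* Sketch: sum the "duration: X ms" values from trace_step records and
 * compare against the "Management process finished ... duration = Y ms"
 * summary. Feed it one pipeline's records at a time, e.g. the 'FTL
 * startup' section above, and the step total should land near 128.910 ms. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char line[4096];
    double step_sum = 0.0, reported_total = -1.0;

    while (fgets(line, sizeof(line), stdin) != NULL) {
        const char *p;

        /* Per-step records look like:
         * "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms" */
        if (strstr(line, "trace_step") != NULL &&
            (p = strstr(line, "duration: ")) != NULL)
            step_sum += strtod(p + strlen("duration: "), NULL);

        /* Summary record:
         * "... Management process finished, name 'FTL startup', duration = 128.910 ms, result 0" */
        if (strstr(line, "Management process finished") != NULL &&
            (p = strstr(line, "duration = ")) != NULL)
            reported_total = strtod(p + strlen("duration = "), NULL);
    }

    printf("sum of step durations: %.3f ms\n", step_sum);
    if (reported_total >= 0.0)
        printf("reported total:        %.3f ms\n", reported_total);
    return 0;
}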
00:31:43.548 [2024-11-26 01:17:06.269174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.548 [2024-11-26 01:17:06.297003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.548 [2024-11-26 01:17:06.413828] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:43.548 [2024-11-26 01:17:06.413936] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:43.810 [2024-11-26 01:17:06.575297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.575356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:43.810 [2024-11-26 01:17:06.575372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:43.810 [2024-11-26 01:17:06.575381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.575440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.575451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:43.810 [2024-11-26 01:17:06.575464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:43.810 [2024-11-26 01:17:06.575479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.575500] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:43.810 [2024-11-26 01:17:06.575782] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:43.810 [2024-11-26 01:17:06.575799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.575811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:43.810 [2024-11-26 01:17:06.575820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:31:43.810 [2024-11-26 01:17:06.575829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.576154] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:43.810 [2024-11-26 01:17:06.576183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.576193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:43.810 [2024-11-26 01:17:06.576211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:43.810 [2024-11-26 01:17:06.576223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.576281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.576291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:43.810 [2024-11-26 01:17:06.576303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:31:43.810 [2024-11-26 01:17:06.576314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.576601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.576621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:43.810 [2024-11-26 01:17:06.576631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:31:43.810 [2024-11-26 01:17:06.576641] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.576725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.576737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:43.810 [2024-11-26 01:17:06.576749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:43.810 [2024-11-26 01:17:06.576756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.576780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.576794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:43.810 [2024-11-26 01:17:06.576803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:43.810 [2024-11-26 01:17:06.576811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.576837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:43.810 [2024-11-26 01:17:06.578997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.579034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:43.810 [2024-11-26 01:17:06.579045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:31:43.810 [2024-11-26 01:17:06.579053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.579095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.579108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:43.810 [2024-11-26 01:17:06.579120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:43.810 [2024-11-26 01:17:06.579128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.579183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:43.810 [2024-11-26 01:17:06.579210] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:43.810 [2024-11-26 01:17:06.579246] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:43.810 [2024-11-26 01:17:06.579262] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:43.810 [2024-11-26 01:17:06.579368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:43.810 [2024-11-26 01:17:06.579380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:43.810 [2024-11-26 01:17:06.579391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:43.810 [2024-11-26 01:17:06.579409] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:43.810 [2024-11-26 01:17:06.579418] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:43.810 [2024-11-26 01:17:06.579427] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:43.810 [2024-11-26 01:17:06.579434] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:31:43.810 [2024-11-26 01:17:06.579442] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:43.810 [2024-11-26 01:17:06.579454] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:43.810 [2024-11-26 01:17:06.579465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.579473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:43.810 [2024-11-26 01:17:06.579482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:31:43.810 [2024-11-26 01:17:06.579493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.579580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.579592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:43.810 [2024-11-26 01:17:06.579604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:31:43.810 [2024-11-26 01:17:06.579617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.579720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:43.810 [2024-11-26 01:17:06.579735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:43.810 [2024-11-26 01:17:06.579745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.810 [2024-11-26 01:17:06.579754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.810 [2024-11-26 01:17:06.579763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:43.810 [2024-11-26 01:17:06.579770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:43.810 [2024-11-26 01:17:06.579777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:43.810 [2024-11-26 01:17:06.579785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:43.810 [2024-11-26 01:17:06.579799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:43.810 [2024-11-26 01:17:06.579806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.810 [2024-11-26 01:17:06.579812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:43.810 [2024-11-26 01:17:06.579820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:43.810 [2024-11-26 01:17:06.579828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:43.810 [2024-11-26 01:17:06.579835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:43.810 [2024-11-26 01:17:06.580100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:43.810 [2024-11-26 01:17:06.580129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:43.810 [2024-11-26 01:17:06.580175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:43.810 [2024-11-26 01:17:06.580194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:43.810 [2024-11-26 01:17:06.580232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580251] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.810 [2024-11-26 01:17:06.580269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:43.810 [2024-11-26 01:17:06.580287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.810 [2024-11-26 01:17:06.580324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:43.810 [2024-11-26 01:17:06.580342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.810 [2024-11-26 01:17:06.580378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:43.810 [2024-11-26 01:17:06.580396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:43.810 [2024-11-26 01:17:06.580432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:43.810 [2024-11-26 01:17:06.580450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.810 [2024-11-26 01:17:06.580573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:43.810 [2024-11-26 01:17:06.580592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:43.810 [2024-11-26 01:17:06.580610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:43.810 [2024-11-26 01:17:06.580628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:43.810 [2024-11-26 01:17:06.580646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:43.810 [2024-11-26 01:17:06.580664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:43.810 [2024-11-26 01:17:06.580700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:43.810 [2024-11-26 01:17:06.580717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580738] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:43.810 [2024-11-26 01:17:06.580759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:43.810 [2024-11-26 01:17:06.580782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:43.810 [2024-11-26 01:17:06.580801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:43.810 [2024-11-26 01:17:06.580820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:43.810 [2024-11-26 01:17:06.580853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:43.810 [2024-11-26 01:17:06.580877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:43.810 [2024-11-26 01:17:06.580896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:43.810 [2024-11-26 01:17:06.580915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:43.810 [2024-11-26 01:17:06.580932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:31:43.810 [2024-11-26 01:17:06.581003] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:43.810 [2024-11-26 01:17:06.581040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.810 [2024-11-26 01:17:06.581071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:43.810 [2024-11-26 01:17:06.581100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:43.810 [2024-11-26 01:17:06.581129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:43.810 [2024-11-26 01:17:06.581156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:43.810 [2024-11-26 01:17:06.581185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:43.810 [2024-11-26 01:17:06.581212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:43.810 [2024-11-26 01:17:06.581270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:43.810 [2024-11-26 01:17:06.581295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:43.810 [2024-11-26 01:17:06.581303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:43.810 [2024-11-26 01:17:06.581311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:43.810 [2024-11-26 01:17:06.581322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:43.810 [2024-11-26 01:17:06.581330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:43.810 [2024-11-26 01:17:06.581337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:43.810 [2024-11-26 01:17:06.581345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:43.810 [2024-11-26 01:17:06.581353] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:43.810 [2024-11-26 01:17:06.581362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:43.810 [2024-11-26 01:17:06.581371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:43.810 [2024-11-26 01:17:06.581379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:43.810 [2024-11-26 01:17:06.581387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:43.810 [2024-11-26 01:17:06.581394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:43.810 [2024-11-26 01:17:06.581404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.581415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:43.810 [2024-11-26 01:17:06.581424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.751 ms 00:31:43.810 [2024-11-26 01:17:06.581433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.591389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.591558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:43.810 [2024-11-26 01:17:06.591577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.894 ms 00:31:43.810 [2024-11-26 01:17:06.591595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.591691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.591706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:43.810 [2024-11-26 01:17:06.591715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:31:43.810 [2024-11-26 01:17:06.591730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.612967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.613033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:43.810 [2024-11-26 01:17:06.613053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.177 ms 00:31:43.810 [2024-11-26 01:17:06.613065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.613127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.613144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:43.810 [2024-11-26 01:17:06.613158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:43.810 [2024-11-26 01:17:06.613172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.613323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.613348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:43.810 [2024-11-26 01:17:06.613362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:31:43.810 [2024-11-26 01:17:06.613376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.810 [2024-11-26 01:17:06.613571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.810 [2024-11-26 01:17:06.613590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:43.811 [2024-11-26 01:17:06.613603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:31:43.811 [2024-11-26 01:17:06.613614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.623470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 
01:17:06.623664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:43.811 [2024-11-26 01:17:06.623693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.827 ms 00:31:43.811 [2024-11-26 01:17:06.623702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.623892] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:43.811 [2024-11-26 01:17:06.623907] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:43.811 [2024-11-26 01:17:06.623918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.623926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:43.811 [2024-11-26 01:17:06.623937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:31:43.811 [2024-11-26 01:17:06.623948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.636265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.636306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:43.811 [2024-11-26 01:17:06.636318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:31:43.811 [2024-11-26 01:17:06.636325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.636456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.636466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:43.811 [2024-11-26 01:17:06.636486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:31:43.811 [2024-11-26 01:17:06.636494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.636545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.636559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:43.811 [2024-11-26 01:17:06.636569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:43.811 [2024-11-26 01:17:06.636581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.637080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.637100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:43.811 [2024-11-26 01:17:06.637109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:31:43.811 [2024-11-26 01:17:06.637117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.637134] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:43.811 [2024-11-26 01:17:06.637148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.637160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:43.811 [2024-11-26 01:17:06.637170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:43.811 [2024-11-26 01:17:06.637178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.646951] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:43.811 [2024-11-26 01:17:06.647114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.647125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:43.811 [2024-11-26 01:17:06.647135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.917 ms 00:31:43.811 [2024-11-26 01:17:06.647145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.649729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.649766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:43.811 [2024-11-26 01:17:06.649776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.561 ms 00:31:43.811 [2024-11-26 01:17:06.649787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.649904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.649917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:43.811 [2024-11-26 01:17:06.649932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:43.811 [2024-11-26 01:17:06.649948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.649977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.650007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:43.811 [2024-11-26 01:17:06.650019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:43.811 [2024-11-26 01:17:06.650026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.650063] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:43.811 [2024-11-26 01:17:06.650073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.650081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:43.811 [2024-11-26 01:17:06.650089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:43.811 [2024-11-26 01:17:06.650112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.656312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.656365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:43.811 [2024-11-26 01:17:06.656377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:31:43.811 [2024-11-26 01:17:06.656385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.656474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.811 [2024-11-26 01:17:06.656493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:43.811 [2024-11-26 01:17:06.656503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:43.811 [2024-11-26 01:17:06.656511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.811 [2024-11-26 01:17:06.657716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 81.990 ms, result 0 
00:31:45.198  [2024-11-26T01:17:09.059Z] Copying: 20/1024 [MB] (20 MBps) ... [2024-11-26T01:18:18.848Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-26 01:18:18.795961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.931 [2024-11-26 01:18:18.796052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:55.931 [2024-11-26 01:18:18.796071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:55.931 [2024-11-26 01:18:18.796082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.931 [2024-11-26 01:18:18.796116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:55.931 [2024-11-26 01:18:18.796972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.931 [2024-11-26 01:18:18.797004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:55.931 [2024-11-26 01:18:18.797018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:32:55.931 [2024-11-26 01:18:18.797029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.931 [2024-11-26 01:18:18.797300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.931 [2024-11-26 01:18:18.797313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:55.931 [2024-11-26 01:18:18.797324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:32:55.931 [2024-11-26 01:18:18.797333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.931 [2024-11-26 01:18:18.797372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.931 [2024-11-26 01:18:18.797383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:55.931 [2024-11-26 01:18:18.797393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:55.931 [2024-11-26 01:18:18.797402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.931 [2024-11-26 01:18:18.797471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.931 [2024-11-26 01:18:18.797483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:55.931 [2024-11-26 01:18:18.797493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:55.931 [2024-11-26 01:18:18.797510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.931 [2024-11-26 01:18:18.797526] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:55.931 [2024-11-26 01:18:18.797540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:55.931 [2024-11-26 01:18:18.797867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.797992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798039] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 
01:18:18.798283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:55.932 [2024-11-26 01:18:18.798509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:55.932 [2024-11-26 01:18:18.798518] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7b160b2-bbfe-4468-82dc-525b9e97e52c 00:32:55.932 [2024-11-26 01:18:18.798527] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:55.932 [2024-11-26 01:18:18.798535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:55.932 [2024-11-26 01:18:18.798543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:55.932 [2024-11-26 01:18:18.798558] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:55.932 [2024-11-26 01:18:18.798567] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:55.932 [2024-11-26 01:18:18.798575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:55.932 [2024-11-26 01:18:18.798583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:55.932 [2024-11-26 01:18:18.798591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:55.932 [2024-11-26 01:18:18.798598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:55.932 [2024-11-26 01:18:18.798613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.932 [2024-11-26 01:18:18.798622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:55.932 [2024-11-26 01:18:18.798632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:32:55.932 [2024-11-26 01:18:18.798644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.932 [2024-11-26 01:18:18.801818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.932 [2024-11-26 01:18:18.801878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:55.932 [2024-11-26 01:18:18.801893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:32:55.932 [2024-11-26 01:18:18.801903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.932 [2024-11-26 01:18:18.802048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:55.932 [2024-11-26 01:18:18.802059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:55.932 [2024-11-26 01:18:18.802074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:32:55.932 [2024-11-26 01:18:18.802082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.932 [2024-11-26 01:18:18.810217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.932 [2024-11-26 01:18:18.810418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:55.932 [2024-11-26 01:18:18.810490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.932 [2024-11-26 01:18:18.810514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.932 [2024-11-26 01:18:18.810605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.810628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:55.933 [2024-11-26 01:18:18.810655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.810674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.810751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.810775] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:55.933 [2024-11-26 01:18:18.810798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.810909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.810951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.811130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:55.933 [2024-11-26 01:18:18.811176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.811246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.824795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.824990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:55.933 [2024-11-26 01:18:18.825047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.825071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.835885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.836056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:55.933 [2024-11-26 01:18:18.836122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.836145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.836206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.836228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:55.933 [2024-11-26 01:18:18.836249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.836267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.836317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.836340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:55.933 [2024-11-26 01:18:18.836360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.836458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.836542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.836565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:55.933 [2024-11-26 01:18:18.836595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.836663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.836716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.836756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:55.933 [2024-11-26 01:18:18.836777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.836796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.836913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.836972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:55.933 [2024-11-26 01:18:18.837015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.837038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.837127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:55.933 [2024-11-26 01:18:18.837154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:55.933 [2024-11-26 01:18:18.837174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:55.933 [2024-11-26 01:18:18.837340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:55.933 [2024-11-26 01:18:18.837505] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.520 ms, result 0 00:32:56.195 00:32:56.195 00:32:56.195 01:18:19 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:58.743 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:58.743 01:18:21 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:58.743 [2024-11-26 01:18:21.417082] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:32:58.743 [2024-11-26 01:18:21.417233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98291 ] 00:32:58.743 [2024-11-26 01:18:21.554830] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:58.743 [2024-11-26 01:18:21.585481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:58.743 [2024-11-26 01:18:21.620143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.006 [2024-11-26 01:18:21.730076] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:59.006 [2024-11-26 01:18:21.730167] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:59.006 [2024-11-26 01:18:21.892322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.006 [2024-11-26 01:18:21.892385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:59.006 [2024-11-26 01:18:21.892401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:59.006 [2024-11-26 01:18:21.892410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.006 [2024-11-26 01:18:21.892468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.006 [2024-11-26 01:18:21.892483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:59.006 [2024-11-26 01:18:21.892492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:59.006 [2024-11-26 01:18:21.892503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.006 [2024-11-26 01:18:21.892524] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:59.006 [2024-11-26 01:18:21.892804] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:59.006 [2024-11-26 01:18:21.892823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.006 [2024-11-26 01:18:21.892833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:59.006 [2024-11-26 01:18:21.892872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:32:59.006 [2024-11-26 01:18:21.892881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.006 [2024-11-26 01:18:21.893202] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:59.006 [2024-11-26 01:18:21.893230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.006 [2024-11-26 01:18:21.893239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:59.006 [2024-11-26 01:18:21.893253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:59.006 [2024-11-26 01:18:21.893269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.006 [2024-11-26 01:18:21.893323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.893333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:59.007 [2024-11-26 01:18:21.893344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:59.007 [2024-11-26 01:18:21.893352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.893606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.893629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:59.007 [2024-11-26 01:18:21.893639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:32:59.007 [2024-11-26 01:18:21.893649] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.893733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.893744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:59.007 [2024-11-26 01:18:21.893752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:32:59.007 [2024-11-26 01:18:21.893760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.893785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.893794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:59.007 [2024-11-26 01:18:21.893802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:59.007 [2024-11-26 01:18:21.893810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.893837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:59.007 [2024-11-26 01:18:21.896024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.896062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:59.007 [2024-11-26 01:18:21.896072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.192 ms 00:32:59.007 [2024-11-26 01:18:21.896080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.896121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.896129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:59.007 [2024-11-26 01:18:21.896137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:59.007 [2024-11-26 01:18:21.896145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.896200] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:59.007 [2024-11-26 01:18:21.896222] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:59.007 [2024-11-26 01:18:21.896259] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:59.007 [2024-11-26 01:18:21.896282] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:59.007 [2024-11-26 01:18:21.896388] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:59.007 [2024-11-26 01:18:21.896404] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:59.007 [2024-11-26 01:18:21.896415] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:59.007 [2024-11-26 01:18:21.896431] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896440] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896449] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:59.007 [2024-11-26 01:18:21.896456] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:32:59.007 [2024-11-26 01:18:21.896464] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:59.007 [2024-11-26 01:18:21.896471] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:59.007 [2024-11-26 01:18:21.896478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.896486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:59.007 [2024-11-26 01:18:21.896493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:32:59.007 [2024-11-26 01:18:21.896501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.896583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.007 [2024-11-26 01:18:21.896595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:59.007 [2024-11-26 01:18:21.896602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:59.007 [2024-11-26 01:18:21.896610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.007 [2024-11-26 01:18:21.896723] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:59.007 [2024-11-26 01:18:21.896737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:59.007 [2024-11-26 01:18:21.896747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:59.007 [2024-11-26 01:18:21.896774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:59.007 [2024-11-26 01:18:21.896807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:59.007 [2024-11-26 01:18:21.896823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:59.007 [2024-11-26 01:18:21.896831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:59.007 [2024-11-26 01:18:21.896862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:59.007 [2024-11-26 01:18:21.896871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:59.007 [2024-11-26 01:18:21.896882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:59.007 [2024-11-26 01:18:21.896890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:59.007 [2024-11-26 01:18:21.896910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:59.007 [2024-11-26 01:18:21.896935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896942] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:59.007 [2024-11-26 01:18:21.896959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:59.007 [2024-11-26 01:18:21.896983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:59.007 [2024-11-26 01:18:21.896990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:59.007 [2024-11-26 01:18:21.896998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:59.007 [2024-11-26 01:18:21.897007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:59.007 [2024-11-26 01:18:21.897014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:59.007 [2024-11-26 01:18:21.897022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:59.007 [2024-11-26 01:18:21.897030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:59.007 [2024-11-26 01:18:21.897043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:59.007 [2024-11-26 01:18:21.897051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:59.007 [2024-11-26 01:18:21.897058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:59.008 [2024-11-26 01:18:21.897064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:59.008 [2024-11-26 01:18:21.897071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:59.008 [2024-11-26 01:18:21.897078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:59.008 [2024-11-26 01:18:21.897084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:59.008 [2024-11-26 01:18:21.897091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:59.008 [2024-11-26 01:18:21.897098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:59.008 [2024-11-26 01:18:21.897106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:59.008 [2024-11-26 01:18:21.897113] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:59.008 [2024-11-26 01:18:21.897125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:59.008 [2024-11-26 01:18:21.897135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:59.008 [2024-11-26 01:18:21.897144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:59.008 [2024-11-26 01:18:21.897156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:59.008 [2024-11-26 01:18:21.897162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:59.008 [2024-11-26 01:18:21.897172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:59.008 [2024-11-26 01:18:21.897179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:59.008 [2024-11-26 01:18:21.897186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:59.008 [2024-11-26 01:18:21.897193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:32:59.008 [2024-11-26 01:18:21.897201] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:59.008 [2024-11-26 01:18:21.897211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:59.008 [2024-11-26 01:18:21.897219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:59.008 [2024-11-26 01:18:21.897226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:59.008 [2024-11-26 01:18:21.897233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:59.008 [2024-11-26 01:18:21.897241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:59.008 [2024-11-26 01:18:21.897248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:59.008 [2024-11-26 01:18:21.897255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:59.008 [2024-11-26 01:18:21.897262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:59.008 [2024-11-26 01:18:21.897268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:59.008 [2024-11-26 01:18:21.897276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:59.008 [2024-11-26 01:18:21.897283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:59.008 [2024-11-26 01:18:21.897292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:59.008 [2024-11-26 01:18:21.897298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:59.008 [2024-11-26 01:18:21.897306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:59.008 [2024-11-26 01:18:21.897313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:59.008 [2024-11-26 01:18:21.897320] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:59.008 [2024-11-26 01:18:21.897328] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:59.008 [2024-11-26 01:18:21.897341] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:59.008 [2024-11-26 01:18:21.897348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:59.008 [2024-11-26 01:18:21.897355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:59.008 [2024-11-26 01:18:21.897362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:59.008 [2024-11-26 01:18:21.897369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.008 [2024-11-26 01:18:21.897380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:59.008 [2024-11-26 01:18:21.897388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:32:59.008 [2024-11-26 01:18:21.897398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.008 [2024-11-26 01:18:21.907906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.008 [2024-11-26 01:18:21.907956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:59.008 [2024-11-26 01:18:21.907969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.462 ms 00:32:59.008 [2024-11-26 01:18:21.907983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.008 [2024-11-26 01:18:21.908075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.008 [2024-11-26 01:18:21.908086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:59.008 [2024-11-26 01:18:21.908095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:59.008 [2024-11-26 01:18:21.908107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.271 [2024-11-26 01:18:21.928638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.271 [2024-11-26 01:18:21.928716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:59.271 [2024-11-26 01:18:21.928733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.470 ms 00:32:59.271 [2024-11-26 01:18:21.928745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.271 [2024-11-26 01:18:21.928802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.271 [2024-11-26 01:18:21.928817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:59.271 [2024-11-26 01:18:21.928830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:59.271 [2024-11-26 01:18:21.928869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.271 [2024-11-26 01:18:21.929021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.271 [2024-11-26 01:18:21.929041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:59.271 [2024-11-26 01:18:21.929054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:32:59.272 [2024-11-26 01:18:21.929066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.929245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.929259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:59.272 [2024-11-26 01:18:21.929271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:32:59.272 [2024-11-26 01:18:21.929282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.938046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 
01:18:21.938106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:59.272 [2024-11-26 01:18:21.938155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.736 ms 00:32:59.272 [2024-11-26 01:18:21.938168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.938332] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:59.272 [2024-11-26 01:18:21.938351] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:59.272 [2024-11-26 01:18:21.938366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.938380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:59.272 [2024-11-26 01:18:21.938393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:32:59.272 [2024-11-26 01:18:21.938410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.950792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.950857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:59.272 [2024-11-26 01:18:21.950870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.359 ms 00:32:59.272 [2024-11-26 01:18:21.950878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.951029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.951039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:59.272 [2024-11-26 01:18:21.951052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:32:59.272 [2024-11-26 01:18:21.951059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.951111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.951130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:59.272 [2024-11-26 01:18:21.951138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:59.272 [2024-11-26 01:18:21.951146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.951464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.951498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:59.272 [2024-11-26 01:18:21.951508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:32:59.272 [2024-11-26 01:18:21.951515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.951532] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:59.272 [2024-11-26 01:18:21.951542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.951553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:59.272 [2024-11-26 01:18:21.951561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:59.272 [2024-11-26 01:18:21.951568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.960982] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:59.272 [2024-11-26 01:18:21.961147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.961163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:59.272 [2024-11-26 01:18:21.961174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.561 ms 00:32:59.272 [2024-11-26 01:18:21.961185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.963665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.963702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:59.272 [2024-11-26 01:18:21.963714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:32:59.272 [2024-11-26 01:18:21.963721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.963822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.963833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:59.272 [2024-11-26 01:18:21.963861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:59.272 [2024-11-26 01:18:21.963874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.963899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.963914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:59.272 [2024-11-26 01:18:21.963922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:59.272 [2024-11-26 01:18:21.963936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.963972] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:59.272 [2024-11-26 01:18:21.963982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.963989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:59.272 [2024-11-26 01:18:21.963997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:59.272 [2024-11-26 01:18:21.964009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.970308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.970503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:59.272 [2024-11-26 01:18:21.970523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.277 ms 00:32:59.272 [2024-11-26 01:18:21.970532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.970615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:59.272 [2024-11-26 01:18:21.970625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:59.272 [2024-11-26 01:18:21.970638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:32:59.272 [2024-11-26 01:18:21.970645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:59.272 [2024-11-26 01:18:21.971816] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 79.071 ms, result 0 
00:33:00.214  [2024-11-26T01:18:24.076Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-26T01:18:25.019Z] Copying: 33/1024 [MB] (18 MBps) [2024-11-26T01:18:26.047Z] Copying: 46/1024 [MB] (12 MBps) [2024-11-26T01:18:26.992Z] Copying: 61/1024 [MB] (15 MBps) [2024-11-26T01:18:28.374Z] Copying: 72/1024 [MB] (11 MBps) [2024-11-26T01:18:29.310Z] Copying: 101/1024 [MB] (28 MBps) [2024-11-26T01:18:30.243Z] Copying: 129/1024 [MB] (27 MBps) [2024-11-26T01:18:31.176Z] Copying: 182/1024 [MB] (53 MBps) [2024-11-26T01:18:32.110Z] Copying: 237/1024 [MB] (54 MBps) [2024-11-26T01:18:33.042Z] Copying: 280/1024 [MB] (42 MBps) [2024-11-26T01:18:34.417Z] Copying: 329/1024 [MB] (48 MBps) [2024-11-26T01:18:35.348Z] Copying: 379/1024 [MB] (50 MBps) [2024-11-26T01:18:36.281Z] Copying: 423/1024 [MB] (43 MBps) [2024-11-26T01:18:37.214Z] Copying: 468/1024 [MB] (44 MBps) [2024-11-26T01:18:38.148Z] Copying: 514/1024 [MB] (46 MBps) [2024-11-26T01:18:39.084Z] Copying: 559/1024 [MB] (44 MBps) [2024-11-26T01:18:40.054Z] Copying: 594/1024 [MB] (35 MBps) [2024-11-26T01:18:40.994Z] Copying: 614/1024 [MB] (19 MBps) [2024-11-26T01:18:42.373Z] Copying: 643/1024 [MB] (29 MBps) [2024-11-26T01:18:43.308Z] Copying: 672/1024 [MB] (28 MBps) [2024-11-26T01:18:44.250Z] Copying: 718/1024 [MB] (45 MBps) [2024-11-26T01:18:45.192Z] Copying: 740/1024 [MB] (22 MBps) [2024-11-26T01:18:46.137Z] Copying: 762/1024 [MB] (21 MBps) [2024-11-26T01:18:47.078Z] Copying: 779/1024 [MB] (16 MBps) [2024-11-26T01:18:48.027Z] Copying: 807/1024 [MB] (27 MBps) [2024-11-26T01:18:49.416Z] Copying: 835/1024 [MB] (28 MBps) [2024-11-26T01:18:49.989Z] Copying: 854/1024 [MB] (19 MBps) [2024-11-26T01:18:51.376Z] Copying: 881/1024 [MB] (27 MBps) [2024-11-26T01:18:52.319Z] Copying: 904/1024 [MB] (22 MBps) [2024-11-26T01:18:53.261Z] Copying: 924/1024 [MB] (20 MBps) [2024-11-26T01:18:54.204Z] Copying: 946/1024 [MB] (22 MBps) [2024-11-26T01:18:55.146Z] Copying: 958/1024 [MB] (11 MBps) [2024-11-26T01:18:56.089Z] Copying: 969/1024 [MB] (10 MBps) [2024-11-26T01:18:57.034Z] Copying: 979/1024 [MB] (10 MBps) [2024-11-26T01:18:58.422Z] Copying: 991/1024 [MB] (11 MBps) [2024-11-26T01:18:58.994Z] Copying: 1002/1024 [MB] (11 MBps) [2024-11-26T01:19:00.375Z] Copying: 1015/1024 [MB] (13 MBps) [2024-11-26T01:19:00.375Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 01:18:59.946198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.458 [2024-11-26 01:18:59.946308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:37.458 [2024-11-26 01:18:59.946364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:37.458 [2024-11-26 01:18:59.946383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.458 [2024-11-26 01:18:59.947487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:37.458 [2024-11-26 01:18:59.949597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.458 [2024-11-26 01:18:59.949686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:37.458 [2024-11-26 01:18:59.949732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:33:37.458 [2024-11-26 01:18:59.949757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.459 [2024-11-26 01:18:59.956733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.459 [2024-11-26 01:18:59.956824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop 
core poller 00:33:37.459 [2024-11-26 01:18:59.956887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.141 ms 00:33:37.459 [2024-11-26 01:18:59.956908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.459 [2024-11-26 01:18:59.956947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.459 [2024-11-26 01:18:59.956998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:37.459 [2024-11-26 01:18:59.957017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:37.459 [2024-11-26 01:18:59.957032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.459 [2024-11-26 01:18:59.957102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.459 [2024-11-26 01:18:59.957124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:37.459 [2024-11-26 01:18:59.957139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:37.459 [2024-11-26 01:18:59.957253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.459 [2024-11-26 01:18:59.957281] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:37.459 [2024-11-26 01:18:59.957301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128512 / 261120 wr_cnt: 1 state: open 00:33:37.459 [2024-11-26 01:18:59.957328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:33:37.459 [2024-11-26 01:18:59.957517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:37.459 [2024-11-26 01:18:59.957880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957975] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.957998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.958004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.958010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.958015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.958021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.958027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:37.460 [2024-11-26 01:18:59.958040] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:37.460 [2024-11-26 01:18:59.958046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7b160b2-bbfe-4468-82dc-525b9e97e52c 00:33:37.460 [2024-11-26 01:18:59.958052] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128512 00:33:37.460 [2024-11-26 01:18:59.958058] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128544 00:33:37.460 [2024-11-26 01:18:59.958063] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128512 00:33:37.460 [2024-11-26 01:18:59.958069] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:33:37.460 [2024-11-26 01:18:59.958077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:37.460 [2024-11-26 01:18:59.958083] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:37.460 [2024-11-26 01:18:59.958089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:37.460 [2024-11-26 01:18:59.958095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:37.460 [2024-11-26 01:18:59.958100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:37.460 [2024-11-26 01:18:59.958106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.460 [2024-11-26 01:18:59.958112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:37.460 [2024-11-26 01:18:59.958118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:33:37.460 [2024-11-26 01:18:59.958123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.959375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.460 [2024-11-26 01:18:59.959390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:37.460 [2024-11-26 01:18:59.959401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:33:37.460 [2024-11-26 01:18:59.959407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 
01:18:59.959474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:37.460 [2024-11-26 01:18:59.959481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:37.460 [2024-11-26 01:18:59.959487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:33:37.460 [2024-11-26 01:18:59.959496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.963662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.963688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:37.460 [2024-11-26 01:18:59.963695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.963701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.963754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.963761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:37.460 [2024-11-26 01:18:59.963767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.963772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.963805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.963814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:37.460 [2024-11-26 01:18:59.963823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.963833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.963856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.963863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:37.460 [2024-11-26 01:18:59.963869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.963875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.971190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.971227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:37.460 [2024-11-26 01:18:59.971235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.971243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.977609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.977643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:37.460 [2024-11-26 01:18:59.977651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.977657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.977675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.977682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:37.460 [2024-11-26 01:18:59.977692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.977698] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.977732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.977739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:37.460 [2024-11-26 01:18:59.977745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.977750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.977794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.977801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:37.460 [2024-11-26 01:18:59.977811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.977819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.977835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.977851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:37.460 [2024-11-26 01:18:59.977857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.977862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.977890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.977901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:37.460 [2024-11-26 01:18:59.977907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.977915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.977946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:37.460 [2024-11-26 01:18:59.977954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:37.460 [2024-11-26 01:18:59.977960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:37.460 [2024-11-26 01:18:59.977965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:37.460 [2024-11-26 01:18:59.978059] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 34.030 ms, result 0 00:33:38.113 00:33:38.113 00:33:38.113 01:19:00 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:33:38.113 [2024-11-26 01:19:00.860363] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:33:38.113 [2024-11-26 01:19:00.860501] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98724 ] 00:33:38.113 [2024-11-26 01:19:00.996280] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:33:38.113 [2024-11-26 01:19:01.023504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.373 [2024-11-26 01:19:01.047173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:38.373 [2024-11-26 01:19:01.132473] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:38.373 [2024-11-26 01:19:01.132525] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:38.373 [2024-11-26 01:19:01.279254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.279284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:38.373 [2024-11-26 01:19:01.279293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:38.373 [2024-11-26 01:19:01.279299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.279331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.279341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:38.373 [2024-11-26 01:19:01.279347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:33:38.373 [2024-11-26 01:19:01.279354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.279371] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:38.373 [2024-11-26 01:19:01.279538] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:38.373 [2024-11-26 01:19:01.279553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.279561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:38.373 [2024-11-26 01:19:01.279568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:33:38.373 [2024-11-26 01:19:01.279573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.279753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:38.373 [2024-11-26 01:19:01.279768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.279778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:38.373 [2024-11-26 01:19:01.279787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:33:38.373 [2024-11-26 01:19:01.279795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.279862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.279870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:38.373 [2024-11-26 01:19:01.279880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:38.373 [2024-11-26 01:19:01.279886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.280068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.280076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:38.373 [2024-11-26 01:19:01.280082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:33:38.373 [2024-11-26 01:19:01.280094] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.280149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.280156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:38.373 [2024-11-26 01:19:01.280164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:38.373 [2024-11-26 01:19:01.280169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.280185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.373 [2024-11-26 01:19:01.280191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:38.373 [2024-11-26 01:19:01.280197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:38.373 [2024-11-26 01:19:01.280202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.373 [2024-11-26 01:19:01.280217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:38.373 [2024-11-26 01:19:01.281537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.374 [2024-11-26 01:19:01.281550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:38.374 [2024-11-26 01:19:01.281557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.322 ms 00:33:38.374 [2024-11-26 01:19:01.281564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.374 [2024-11-26 01:19:01.281593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.374 [2024-11-26 01:19:01.281600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:38.374 [2024-11-26 01:19:01.281607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:38.374 [2024-11-26 01:19:01.281613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.374 [2024-11-26 01:19:01.281629] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:38.374 [2024-11-26 01:19:01.281643] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:38.374 [2024-11-26 01:19:01.281668] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:38.374 [2024-11-26 01:19:01.281679] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:38.374 [2024-11-26 01:19:01.281754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:38.374 [2024-11-26 01:19:01.281762] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:38.374 [2024-11-26 01:19:01.281770] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:38.374 [2024-11-26 01:19:01.281783] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:38.374 [2024-11-26 01:19:01.281789] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:38.374 [2024-11-26 01:19:01.281798] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:38.374 [2024-11-26 01:19:01.281803] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:33:38.374 [2024-11-26 01:19:01.281808] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:38.374 [2024-11-26 01:19:01.281815] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:38.374 [2024-11-26 01:19:01.281821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.374 [2024-11-26 01:19:01.281827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:38.374 [2024-11-26 01:19:01.281832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:33:38.374 [2024-11-26 01:19:01.281837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.374 [2024-11-26 01:19:01.281912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.374 [2024-11-26 01:19:01.281920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:38.374 [2024-11-26 01:19:01.281926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:38.374 [2024-11-26 01:19:01.281931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.374 [2024-11-26 01:19:01.282002] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:38.374 [2024-11-26 01:19:01.282009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:38.374 [2024-11-26 01:19:01.282015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:38.374 [2024-11-26 01:19:01.282033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:38.374 [2024-11-26 01:19:01.282054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:38.374 [2024-11-26 01:19:01.282064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:38.374 [2024-11-26 01:19:01.282069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:38.374 [2024-11-26 01:19:01.282074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:38.374 [2024-11-26 01:19:01.282079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:38.374 [2024-11-26 01:19:01.282084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:38.374 [2024-11-26 01:19:01.282088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:38.374 [2024-11-26 01:19:01.282098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:38.374 [2024-11-26 01:19:01.282114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282119] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:38.374 [2024-11-26 01:19:01.282130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:38.374 [2024-11-26 01:19:01.282178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:38.374 [2024-11-26 01:19:01.282192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:38.374 [2024-11-26 01:19:01.282208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:38.374 [2024-11-26 01:19:01.282218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:38.374 [2024-11-26 01:19:01.282223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:38.374 [2024-11-26 01:19:01.282227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:38.374 [2024-11-26 01:19:01.282232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:38.374 [2024-11-26 01:19:01.282237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:38.374 [2024-11-26 01:19:01.282245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:38.374 [2024-11-26 01:19:01.282255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:38.374 [2024-11-26 01:19:01.282259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282264] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:38.374 [2024-11-26 01:19:01.282269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:38.374 [2024-11-26 01:19:01.282276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:38.374 [2024-11-26 01:19:01.282281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:38.374 [2024-11-26 01:19:01.282287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:38.375 [2024-11-26 01:19:01.282292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:38.375 [2024-11-26 01:19:01.282297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:38.375 [2024-11-26 01:19:01.282301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:38.375 [2024-11-26 01:19:01.282306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:38.375 [2024-11-26 01:19:01.282311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:33:38.375 [2024-11-26 01:19:01.282317] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:38.375 [2024-11-26 01:19:01.282324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:38.375 [2024-11-26 01:19:01.282334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:38.375 [2024-11-26 01:19:01.282340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:38.375 [2024-11-26 01:19:01.282345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:38.375 [2024-11-26 01:19:01.282350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:38.375 [2024-11-26 01:19:01.282355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:38.375 [2024-11-26 01:19:01.282361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:38.375 [2024-11-26 01:19:01.282366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:38.375 [2024-11-26 01:19:01.282371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:38.375 [2024-11-26 01:19:01.282376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:38.375 [2024-11-26 01:19:01.282381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:38.375 [2024-11-26 01:19:01.282386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:38.375 [2024-11-26 01:19:01.282391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:38.375 [2024-11-26 01:19:01.282397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:38.375 [2024-11-26 01:19:01.282402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:38.375 [2024-11-26 01:19:01.282408] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:38.375 [2024-11-26 01:19:01.282414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:38.375 [2024-11-26 01:19:01.282421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:38.375 [2024-11-26 01:19:01.282426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:38.375 [2024-11-26 01:19:01.282431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:38.375 [2024-11-26 01:19:01.282437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:38.375 [2024-11-26 01:19:01.282442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.375 [2024-11-26 01:19:01.282447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:38.375 [2024-11-26 01:19:01.282453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:33:38.375 [2024-11-26 01:19:01.282459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.287702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.287724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:38.635 [2024-11-26 01:19:01.287731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.213 ms 00:33:38.635 [2024-11-26 01:19:01.287737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.287797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.287803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:38.635 [2024-11-26 01:19:01.287809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:38.635 [2024-11-26 01:19:01.287818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.303716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.303752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:38.635 [2024-11-26 01:19:01.303766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.840 ms 00:33:38.635 [2024-11-26 01:19:01.303775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.303826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.303837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:38.635 [2024-11-26 01:19:01.303864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:38.635 [2024-11-26 01:19:01.303873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.303977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.303992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:38.635 [2024-11-26 01:19:01.304002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:38.635 [2024-11-26 01:19:01.304010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.304143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.304159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:38.635 [2024-11-26 01:19:01.304174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:33:38.635 [2024-11-26 01:19:01.304183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.309416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 
01:19:01.309449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:38.635 [2024-11-26 01:19:01.309462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.211 ms 00:33:38.635 [2024-11-26 01:19:01.309470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.309565] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:38.635 [2024-11-26 01:19:01.309584] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:38.635 [2024-11-26 01:19:01.309594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.309602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:38.635 [2024-11-26 01:19:01.309612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:38.635 [2024-11-26 01:19:01.309622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.322081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.322104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:38.635 [2024-11-26 01:19:01.322114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.443 ms 00:33:38.635 [2024-11-26 01:19:01.322122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.322242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.322251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:38.635 [2024-11-26 01:19:01.322263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:33:38.635 [2024-11-26 01:19:01.322272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.322311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.322323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:38.635 [2024-11-26 01:19:01.322331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:38.635 [2024-11-26 01:19:01.322338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.322632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.322648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:38.635 [2024-11-26 01:19:01.322656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:33:38.635 [2024-11-26 01:19:01.322663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.322676] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:38.635 [2024-11-26 01:19:01.322685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.322697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:38.635 [2024-11-26 01:19:01.322706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:38.635 [2024-11-26 01:19:01.322713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.328865] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:38.635 [2024-11-26 01:19:01.328959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.328970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:38.635 [2024-11-26 01:19:01.328977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.231 ms 00:33:38.635 [2024-11-26 01:19:01.328985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.330855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.635 [2024-11-26 01:19:01.330868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:38.635 [2024-11-26 01:19:01.330875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:33:38.635 [2024-11-26 01:19:01.330881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.635 [2024-11-26 01:19:01.330921] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:33:38.635 [2024-11-26 01:19:01.331348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.636 [2024-11-26 01:19:01.331367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:38.636 [2024-11-26 01:19:01.331376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:33:38.636 [2024-11-26 01:19:01.331381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.636 [2024-11-26 01:19:01.331398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.636 [2024-11-26 01:19:01.331404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:38.636 [2024-11-26 01:19:01.331409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:38.636 [2024-11-26 01:19:01.331414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.636 [2024-11-26 01:19:01.331436] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:38.636 [2024-11-26 01:19:01.331443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.636 [2024-11-26 01:19:01.331449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:38.636 [2024-11-26 01:19:01.331454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:38.636 [2024-11-26 01:19:01.331461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.636 [2024-11-26 01:19:01.334519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.636 [2024-11-26 01:19:01.334543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:38.636 [2024-11-26 01:19:01.334551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:33:38.636 [2024-11-26 01:19:01.334558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.636 [2024-11-26 01:19:01.334610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:38.636 [2024-11-26 01:19:01.334617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:38.636 [2024-11-26 01:19:01.334623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:38.636 [2024-11-26 01:19:01.334629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:38.636 
[2024-11-26 01:19:01.335305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 55.774 ms, result 0 00:33:39.582  [2024-11-26T01:19:03.886Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-26T01:19:04.831Z] Copying: 45/1024 [MB] (22 MBps) [2024-11-26T01:19:05.774Z] Copying: 59/1024 [MB] (14 MBps) [2024-11-26T01:19:06.718Z] Copying: 74/1024 [MB] (15 MBps) [2024-11-26T01:19:07.661Z] Copying: 91/1024 [MB] (17 MBps) [2024-11-26T01:19:08.600Z] Copying: 109/1024 [MB] (17 MBps) [2024-11-26T01:19:09.541Z] Copying: 131/1024 [MB] (22 MBps) [2024-11-26T01:19:10.486Z] Copying: 151/1024 [MB] (19 MBps) [2024-11-26T01:19:11.876Z] Copying: 173/1024 [MB] (22 MBps) [2024-11-26T01:19:12.823Z] Copying: 199/1024 [MB] (26 MBps) [2024-11-26T01:19:13.769Z] Copying: 212/1024 [MB] (12 MBps) [2024-11-26T01:19:14.714Z] Copying: 223/1024 [MB] (10 MBps) [2024-11-26T01:19:15.660Z] Copying: 233/1024 [MB] (10 MBps) [2024-11-26T01:19:16.606Z] Copying: 244/1024 [MB] (10 MBps) [2024-11-26T01:19:17.549Z] Copying: 254/1024 [MB] (10 MBps) [2024-11-26T01:19:18.493Z] Copying: 265/1024 [MB] (10 MBps) [2024-11-26T01:19:19.881Z] Copying: 287/1024 [MB] (22 MBps) [2024-11-26T01:19:20.826Z] Copying: 304/1024 [MB] (16 MBps) [2024-11-26T01:19:21.773Z] Copying: 315/1024 [MB] (10 MBps) [2024-11-26T01:19:22.719Z] Copying: 329/1024 [MB] (14 MBps) [2024-11-26T01:19:23.663Z] Copying: 345/1024 [MB] (15 MBps) [2024-11-26T01:19:24.609Z] Copying: 362/1024 [MB] (17 MBps) [2024-11-26T01:19:25.553Z] Copying: 375/1024 [MB] (12 MBps) [2024-11-26T01:19:26.497Z] Copying: 386/1024 [MB] (10 MBps) [2024-11-26T01:19:27.886Z] Copying: 397/1024 [MB] (10 MBps) [2024-11-26T01:19:28.827Z] Copying: 407/1024 [MB] (10 MBps) [2024-11-26T01:19:29.767Z] Copying: 429/1024 [MB] (21 MBps) [2024-11-26T01:19:30.708Z] Copying: 442/1024 [MB] (13 MBps) [2024-11-26T01:19:31.648Z] Copying: 455/1024 [MB] (12 MBps) [2024-11-26T01:19:32.590Z] Copying: 468/1024 [MB] (13 MBps) [2024-11-26T01:19:33.535Z] Copying: 480/1024 [MB] (12 MBps) [2024-11-26T01:19:34.482Z] Copying: 501/1024 [MB] (20 MBps) [2024-11-26T01:19:35.502Z] Copying: 520/1024 [MB] (18 MBps) [2024-11-26T01:19:36.889Z] Copying: 534/1024 [MB] (14 MBps) [2024-11-26T01:19:37.834Z] Copying: 550/1024 [MB] (15 MBps) [2024-11-26T01:19:38.779Z] Copying: 563/1024 [MB] (12 MBps) [2024-11-26T01:19:39.738Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-26T01:19:40.684Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-26T01:19:41.629Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-26T01:19:42.571Z] Copying: 607/1024 [MB] (10 MBps) [2024-11-26T01:19:43.513Z] Copying: 627/1024 [MB] (20 MBps) [2024-11-26T01:19:44.897Z] Copying: 646/1024 [MB] (18 MBps) [2024-11-26T01:19:45.837Z] Copying: 665/1024 [MB] (18 MBps) [2024-11-26T01:19:46.781Z] Copying: 679/1024 [MB] (14 MBps) [2024-11-26T01:19:47.727Z] Copying: 699/1024 [MB] (19 MBps) [2024-11-26T01:19:48.671Z] Copying: 721/1024 [MB] (22 MBps) [2024-11-26T01:19:49.612Z] Copying: 740/1024 [MB] (18 MBps) [2024-11-26T01:19:50.556Z] Copying: 759/1024 [MB] (19 MBps) [2024-11-26T01:19:51.502Z] Copying: 781/1024 [MB] (21 MBps) [2024-11-26T01:19:52.883Z] Copying: 801/1024 [MB] (20 MBps) [2024-11-26T01:19:53.828Z] Copying: 827/1024 [MB] (26 MBps) [2024-11-26T01:19:54.773Z] Copying: 855/1024 [MB] (27 MBps) [2024-11-26T01:19:55.717Z] Copying: 871/1024 [MB] (15 MBps) [2024-11-26T01:19:56.661Z] Copying: 885/1024 [MB] (14 MBps) [2024-11-26T01:19:57.605Z] Copying: 904/1024 [MB] (19 MBps) [2024-11-26T01:19:58.550Z] Copying: 916/1024 [MB] (11 MBps) 
[2024-11-26T01:19:59.492Z] Copying: 930/1024 [MB] (13 MBps) [2024-11-26T01:20:00.874Z] Copying: 941/1024 [MB] (11 MBps) [2024-11-26T01:20:01.818Z] Copying: 958/1024 [MB] (16 MBps) [2024-11-26T01:20:02.765Z] Copying: 975/1024 [MB] (17 MBps) [2024-11-26T01:20:03.709Z] Copying: 990/1024 [MB] (14 MBps) [2024-11-26T01:20:04.654Z] Copying: 1005/1024 [MB] (14 MBps) [2024-11-26T01:20:04.915Z] Copying: 1015/1024 [MB] (10 MBps) [2024-11-26T01:20:05.179Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-26 01:20:05.159999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.262 [2024-11-26 01:20:05.160124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:42.262 [2024-11-26 01:20:05.160151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:42.262 [2024-11-26 01:20:05.160169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.262 [2024-11-26 01:20:05.160213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:42.262 [2024-11-26 01:20:05.161523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.262 [2024-11-26 01:20:05.161604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:42.262 [2024-11-26 01:20:05.161623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:34:42.262 [2024-11-26 01:20:05.161644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.262 [2024-11-26 01:20:05.162122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.262 [2024-11-26 01:20:05.162216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:42.262 [2024-11-26 01:20:05.162244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:34:42.262 [2024-11-26 01:20:05.162260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.262 [2024-11-26 01:20:05.162314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.262 [2024-11-26 01:20:05.162333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:42.262 [2024-11-26 01:20:05.162349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:42.262 [2024-11-26 01:20:05.162365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.262 [2024-11-26 01:20:05.162479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.262 [2024-11-26 01:20:05.162509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:42.262 [2024-11-26 01:20:05.162526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:42.262 [2024-11-26 01:20:05.162542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.262 [2024-11-26 01:20:05.162569] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:42.262 [2024-11-26 01:20:05.162600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:34:42.262 [2024-11-26 01:20:05.162620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 
261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.162993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:42.262 [2024-11-26 01:20:05.163191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163339] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 
01:20:05.163623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:42.263 [2024-11-26 01:20:05.163900] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:42.263 [2024-11-26 01:20:05.163924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b7b160b2-bbfe-4468-82dc-525b9e97e52c 00:34:42.263 [2024-11-26 01:20:05.163937] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:34:42.263 [2024-11-26 01:20:05.163948] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2592 00:34:42.263 [2024-11-26 01:20:05.163959] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2560 00:34:42.263 [2024-11-26 01:20:05.163974] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0125 00:34:42.263 [2024-11-26 01:20:05.163984] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:42.263 [2024-11-26 01:20:05.163996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:42.263 [2024-11-26 01:20:05.164008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:42.263 [2024-11-26 01:20:05.164017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:42.263 [2024-11-26 01:20:05.164026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:42.263 [2024-11-26 01:20:05.164037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.263 [2024-11-26 01:20:05.164048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:42.263 [2024-11-26 01:20:05.164059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.469 ms 00:34:42.263 [2024-11-26 01:20:05.164069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.263 [2024-11-26 01:20:05.166750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.263 [2024-11-26 01:20:05.166810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:42.263 [2024-11-26 01:20:05.166825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:34:42.263 [2024-11-26 01:20:05.166837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.263 [2024-11-26 01:20:05.166997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:42.263 [2024-11-26 01:20:05.167011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:42.263 [2024-11-26 01:20:05.167023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:34:42.263 [2024-11-26 01:20:05.167034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.525 [2024-11-26 01:20:05.175575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.525 [2024-11-26 01:20:05.175629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:42.525 [2024-11-26 01:20:05.175641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.525 [2024-11-26 01:20:05.175655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.525 [2024-11-26 01:20:05.175717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.525 [2024-11-26 01:20:05.175727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:42.525 [2024-11-26 01:20:05.175735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.525 [2024-11-26 01:20:05.175743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.525 [2024-11-26 01:20:05.175805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.525 [2024-11-26 01:20:05.175820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:42.525 [2024-11-26 01:20:05.175827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.175835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
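The write amplification factor reported in the statistics dump above is simply total device writes divided by user writes. As a quick stand-alone check of the dumped values — numbers taken from the ftl_debug.c lines, awk used only because bash arithmetic is integer-only:

    awk 'BEGIN { total = 2592; user = 2560; printf "WAF: %.4f\n", total / user }'
    # prints WAF: 1.0125, matching the [FTL][ftl0] WAF line in the dump

The 32 extra blocks (2592 - 2560) are the FTL's own internal writes (metadata and relocation) on top of the 2560 user blocks.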
00:34:42.526 [2024-11-26 01:20:05.175871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.175880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:42.526 [2024-11-26 01:20:05.175888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.175896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.189615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.189666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:42.526 [2024-11-26 01:20:05.189676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.189684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.200707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.200761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:42.526 [2024-11-26 01:20:05.200773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.200781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.200828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.200854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:42.526 [2024-11-26 01:20:05.200877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.200885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.200921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.200931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:42.526 [2024-11-26 01:20:05.200939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.200946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.201012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.201022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:42.526 [2024-11-26 01:20:05.201039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.201046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.201072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.201080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:42.526 [2024-11-26 01:20:05.201088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.201096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.201136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.201146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:42.526 [2024-11-26 01:20:05.201155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 
01:20:05.201172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.201217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:42.526 [2024-11-26 01:20:05.201239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:42.526 [2024-11-26 01:20:05.201248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:42.526 [2024-11-26 01:20:05.201256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:42.526 [2024-11-26 01:20:05.201389] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 41.383 ms, result 0 00:34:42.788 00:34:42.788 00:34:42.788 01:20:05 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:44.704 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:44.704 01:20:07 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:34:44.704 01:20:07 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:34:44.704 01:20:07 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:44.966 Process with pid 96747 is not found 00:34:44.966 Remove shared memory files 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 96747 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 96747 ']' 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 96747 00:34:44.966 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (96747) - No such process 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 96747 is not found' 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_band_md /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_l2p_l1 /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_l2p_l2 /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_l2p_l2_ctx /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_nvc_md /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_p2l_pool /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_sb /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_sb_shm /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_trim_bitmap /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_trim_log /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_trim_md /dev/hugepages/ftl_b7b160b2-bbfe-4468-82dc-525b9e97e52c_vmap 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:34:44.966 00:34:44.966 real 4m17.058s 00:34:44.966 user 4m6.080s 00:34:44.966 sys 
0m11.001s 00:34:44.966 ************************************ 00:34:44.966 END TEST ftl_restore_fast 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:44.966 01:20:07 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:34:44.966 ************************************ 00:34:44.966 01:20:07 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:34:44.966 01:20:07 ftl -- ftl/ftl.sh@14 -- # killprocess 87914 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@954 -- # '[' -z 87914 ']' 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@958 -- # kill -0 87914 00:34:44.966 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87914) - No such process 00:34:44.966 Process with pid 87914 is not found 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 87914 is not found' 00:34:44.966 01:20:07 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:34:44.966 01:20:07 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=99407 00:34:44.966 01:20:07 ftl -- ftl/ftl.sh@20 -- # waitforlisten 99407 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@835 -- # '[' -z 99407 ']' 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.966 01:20:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:44.966 01:20:07 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:44.966 [2024-11-26 01:20:07.780634] Starting SPDK v25.01-pre git sha1 2a91567e4 / DPDK 24.11.0-rc3 initialization... 00:34:44.966 [2024-11-26 01:20:07.780743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99407 ] 00:34:45.228 [2024-11-26 01:20:07.909661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
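The killprocess calls traced above and below (pids 96747, 87914, and later 99407) all follow the same guard pattern: verify a pid was supplied, probe it with kill -0 (signal 0 delivers nothing and only tests existence and permission), and report when the target is already gone — hence the "No such process" lines, which are expected here rather than failures. A minimal sketch of that pattern, assuming plain bash; the real autotest_common.sh helper additionally inspects the process name via ps -o comm= to special-case sudo, as the pid-99407 trace below shows:

    killprocess() {
        [ -z "$1" ] && return 1             # no pid supplied
        if kill -0 "$1" 2>/dev/null; then   # probe only: does the process exist?
            kill "$1"                       # it does: terminate it
        else
            echo "Process with pid $1 is not found"
        fi
    }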
00:34:45.228 [2024-11-26 01:20:07.936877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:45.228 [2024-11-26 01:20:07.966714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:45.800 01:20:08 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:45.800 01:20:08 ftl -- common/autotest_common.sh@868 -- # return 0 00:34:45.800 01:20:08 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:34:46.061 nvme0n1 00:34:46.061 01:20:08 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:34:46.061 01:20:08 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:46.061 01:20:08 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:34:46.346 01:20:09 ftl -- ftl/common.sh@28 -- # stores=7b05f0a4-8340-4ad3-82df-3ec1983080e8 00:34:46.346 01:20:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:34:46.346 01:20:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7b05f0a4-8340-4ad3-82df-3ec1983080e8 00:34:46.704 01:20:09 ftl -- ftl/ftl.sh@23 -- # killprocess 99407 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 99407 ']' 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@958 -- # kill -0 99407 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@959 -- # uname 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99407 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:46.704 killing process with pid 99407 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99407' 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@973 -- # kill 99407 00:34:46.704 01:20:09 ftl -- common/autotest_common.sh@978 -- # wait 99407 00:34:46.704 01:20:09 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:46.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:46.997 Waiting for block devices as requested 00:34:46.997 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:34:47.258 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:34:47.258 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:34:47.258 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:34:52.548 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:34:52.548 01:20:15 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:34:52.548 Remove shared memory files 00:34:52.548 01:20:15 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:52.548 01:20:15 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:34:52.548 01:20:15 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:34:52.548 01:20:15 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:34:52.548 01:20:15 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:52.548 01:20:15 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:34:52.548 00:34:52.548 real 17m19.389s 00:34:52.548 user 19m4.023s 00:34:52.548 sys 1m18.749s 00:34:52.548 01:20:15 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.548 ************************************ 00:34:52.548 END TEST ftl 00:34:52.548 ************************************ 00:34:52.548 01:20:15 ftl -- 
common/autotest_common.sh@10 -- # set +x 00:34:52.548 01:20:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:34:52.548 01:20:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:34:52.548 01:20:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:34:52.548 01:20:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:34:52.548 01:20:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:34:52.548 01:20:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:34:52.548 01:20:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:34:52.548 01:20:15 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:34:52.548 01:20:15 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:34:52.548 01:20:15 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:34:52.548 01:20:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:52.548 01:20:15 -- common/autotest_common.sh@10 -- # set +x 00:34:52.548 01:20:15 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:34:52.548 01:20:15 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:34:52.548 01:20:15 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:34:52.548 01:20:15 -- common/autotest_common.sh@10 -- # set +x 00:34:53.935 INFO: APP EXITING 00:34:53.935 INFO: killing all VMs 00:34:53.935 INFO: killing vhost app 00:34:53.935 INFO: EXIT DONE 00:34:54.507 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:54.767 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:54.767 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:54.767 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:54.767 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:55.028 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:55.601 Cleaning 00:34:55.601 Removing: /var/run/dpdk/spdk0/config 00:34:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:34:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:34:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:34:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:34:55.601 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:34:55.601 Removing: /var/run/dpdk/spdk0/hugepage_info 00:34:55.601 Removing: /var/run/dpdk/spdk0 00:34:55.601 Removing: /var/run/dpdk/spdk_pid70837 00:34:55.601 Removing: /var/run/dpdk/spdk_pid70995 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71197 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71284 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71307 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71419 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71431 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71614 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71687 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71771 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71867 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71947 00:34:55.601 Removing: /var/run/dpdk/spdk_pid71987 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72018 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72088 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72172 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72592 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72634 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72686 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72696 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72749 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72765 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72823 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72839 00:34:55.601 
Removing: /var/run/dpdk/spdk_pid72881 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72899 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72941 00:34:55.601 Removing: /var/run/dpdk/spdk_pid72959 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73086 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73117 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73206 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73367 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73440 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73460 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73884 00:34:55.601 Removing: /var/run/dpdk/spdk_pid73971 00:34:55.601 Removing: /var/run/dpdk/spdk_pid74077 00:34:55.601 Removing: /var/run/dpdk/spdk_pid74121 00:34:55.601 Removing: /var/run/dpdk/spdk_pid74141 00:34:55.601 Removing: /var/run/dpdk/spdk_pid74225 00:34:55.601 Removing: /var/run/dpdk/spdk_pid74827 00:34:55.601 Removing: /var/run/dpdk/spdk_pid74858 00:34:55.601 Removing: /var/run/dpdk/spdk_pid75317 00:34:55.602 Removing: /var/run/dpdk/spdk_pid75410 00:34:55.602 Removing: /var/run/dpdk/spdk_pid75513 00:34:55.602 Removing: /var/run/dpdk/spdk_pid75550 00:34:55.602 Removing: /var/run/dpdk/spdk_pid75581 00:34:55.602 Removing: /var/run/dpdk/spdk_pid75601 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77419 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77540 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77549 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77561 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77607 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77611 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77623 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77668 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77672 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77684 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77729 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77733 00:34:55.602 Removing: /var/run/dpdk/spdk_pid77745 00:34:55.602 Removing: /var/run/dpdk/spdk_pid79125 00:34:55.602 Removing: /var/run/dpdk/spdk_pid79211 00:34:55.602 Removing: /var/run/dpdk/spdk_pid80612 00:34:55.602 Removing: /var/run/dpdk/spdk_pid82351 00:34:55.602 Removing: /var/run/dpdk/spdk_pid82409 00:34:55.602 Removing: /var/run/dpdk/spdk_pid82473 00:34:55.865 Removing: /var/run/dpdk/spdk_pid82577 00:34:55.865 Removing: /var/run/dpdk/spdk_pid82662 00:34:55.865 Removing: /var/run/dpdk/spdk_pid82754 00:34:55.865 Removing: /var/run/dpdk/spdk_pid82817 00:34:55.865 Removing: /var/run/dpdk/spdk_pid82881 00:34:55.865 Removing: /var/run/dpdk/spdk_pid82980 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83066 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83156 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83214 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83286 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83385 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83466 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83556 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83613 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83683 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83782 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83868 00:34:55.865 Removing: /var/run/dpdk/spdk_pid83953 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84010 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84079 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84143 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84206 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84304 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84389 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84473 00:34:55.865 Removing: /var/run/dpdk/spdk_pid84536 00:34:55.865 Removing: 
00:34:55.865 Removing: /var/run/dpdk/spdk_pid84599
00:34:55.865 Removing: /var/run/dpdk/spdk_pid84668
00:34:55.865 Removing: /var/run/dpdk/spdk_pid84731
00:34:55.865 Removing: /var/run/dpdk/spdk_pid84834
00:34:55.865 Removing: /var/run/dpdk/spdk_pid84914
00:34:55.865 Removing: /var/run/dpdk/spdk_pid85052
00:34:55.865 Removing: /var/run/dpdk/spdk_pid85325
00:34:55.865 Removing: /var/run/dpdk/spdk_pid85352
00:34:55.865 Removing: /var/run/dpdk/spdk_pid85786
00:34:55.865 Removing: /var/run/dpdk/spdk_pid85960
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86052
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86152
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86194
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86214
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86518
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86556
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86614
00:34:55.865 Removing: /var/run/dpdk/spdk_pid86972
00:34:55.865 Removing: /var/run/dpdk/spdk_pid87110
00:34:55.865 Removing: /var/run/dpdk/spdk_pid87914
00:34:55.865 Removing: /var/run/dpdk/spdk_pid88035
00:34:55.865 Removing: /var/run/dpdk/spdk_pid88182
00:34:55.865 Removing: /var/run/dpdk/spdk_pid88267
00:34:55.865 Removing: /var/run/dpdk/spdk_pid88576
00:34:55.865 Removing: /var/run/dpdk/spdk_pid88835
00:34:55.865 Removing: /var/run/dpdk/spdk_pid89177
00:34:55.866 Removing: /var/run/dpdk/spdk_pid89342
00:34:55.866 Removing: /var/run/dpdk/spdk_pid89512
00:34:55.866 Removing: /var/run/dpdk/spdk_pid89549
00:34:55.866 Removing: /var/run/dpdk/spdk_pid89741
00:34:55.866 Removing: /var/run/dpdk/spdk_pid89756
00:34:55.866 Removing: /var/run/dpdk/spdk_pid89800
00:34:55.866 Removing: /var/run/dpdk/spdk_pid90047
00:34:55.866 Removing: /var/run/dpdk/spdk_pid90261
00:34:55.866 Removing: /var/run/dpdk/spdk_pid91004
00:34:55.866 Removing: /var/run/dpdk/spdk_pid91781
00:34:55.866 Removing: /var/run/dpdk/spdk_pid92519
00:34:55.866 Removing: /var/run/dpdk/spdk_pid93383
00:34:55.866 Removing: /var/run/dpdk/spdk_pid93508
00:34:55.866 Removing: /var/run/dpdk/spdk_pid93580
00:34:55.866 Removing: /var/run/dpdk/spdk_pid93917
00:34:55.866 Removing: /var/run/dpdk/spdk_pid93970
00:34:55.866 Removing: /var/run/dpdk/spdk_pid94618
00:34:55.866 Removing: /var/run/dpdk/spdk_pid95036
00:34:55.866 Removing: /var/run/dpdk/spdk_pid95825
00:34:55.866 Removing: /var/run/dpdk/spdk_pid95953
00:34:55.866 Removing: /var/run/dpdk/spdk_pid95978
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96042
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96087
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96144
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96339
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96409
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96473
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96524
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96558
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96616
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96747
00:34:55.866 Removing: /var/run/dpdk/spdk_pid96939
00:34:55.866 Removing: /var/run/dpdk/spdk_pid97524
00:34:55.866 Removing: /var/run/dpdk/spdk_pid98291
00:34:55.866 Removing: /var/run/dpdk/spdk_pid98724
00:34:55.866 Removing: /var/run/dpdk/spdk_pid99407
00:34:55.866 Clean
00:34:56.128 01:20:18 -- common/autotest_common.sh@1453 -- # return 0
00:34:56.128 01:20:18 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:34:56.128 01:20:18 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:56.128 01:20:18 -- common/autotest_common.sh@10 -- # set +x
00:34:56.129 01:20:18 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:34:56.129 01:20:18 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:56.129 01:20:18 -- common/autotest_common.sh@10 -- # set +x
00:34:56.129 01:20:18 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:56.129 01:20:18 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:34:56.129 01:20:18 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:34:56.129 01:20:18 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:34:56.129 01:20:18 -- spdk/autotest.sh@398 -- # hostname
00:34:56.129 01:20:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:56.390 geninfo: WARNING: invalid characters removed from testname!
00:35:22.986 01:20:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:25.536 01:20:47 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:27.457 01:20:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:30.779 01:20:53 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:32.693 01:20:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:35.241 01:20:57 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:35:37.788 01:21:00 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:35:37.788 01:21:00 -- spdk/autorun.sh@1 -- $ timing_finish
00:35:37.788 01:21:00 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:35:37.788 01:21:00 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:35:37.788 01:21:00 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:35:37.788 01:21:00 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
+ [[ -n 5761 ]]
+ sudo kill 5761
00:35:37.799 [Pipeline] }
00:35:37.815 [Pipeline] // timeout
00:35:37.820 [Pipeline] }
00:35:37.834 [Pipeline] // stage
00:35:37.841 [Pipeline] }
00:35:37.856 [Pipeline] // catchError
00:35:37.866 [Pipeline] stage
00:35:37.868 [Pipeline] { (Stop VM)
00:35:37.881 [Pipeline] sh
00:35:38.167 + vagrant halt
00:35:40.715 ==> default: Halting domain...
00:35:47.319 [Pipeline] sh
00:35:47.600 + vagrant destroy -f
00:35:50.233 ==> default: Removing domain...
00:35:51.191 [Pipeline] sh
00:35:51.477 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:35:51.488 [Pipeline] }
00:35:51.504 [Pipeline] // stage
00:35:51.510 [Pipeline] }
00:35:51.525 [Pipeline] // dir
00:35:51.531 [Pipeline] }
00:35:51.546 [Pipeline] // wrap
00:35:51.552 [Pipeline] }
00:35:51.564 [Pipeline] // catchError
00:35:51.574 [Pipeline] stage
00:35:51.576 [Pipeline] { (Epilogue)
00:35:51.590 [Pipeline] sh
00:35:51.875 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:35:57.168 [Pipeline] catchError
00:35:57.170 [Pipeline] {
00:35:57.184 [Pipeline] sh
00:35:57.469 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:57.469 Artifacts sizes are good
00:35:57.480 [Pipeline] }
00:35:57.494 [Pipeline] // catchError
00:35:57.505 [Pipeline] archiveArtifacts
00:35:57.513 Archiving artifacts
00:35:57.609 [Pipeline] cleanWs
00:35:57.622 [WS-CLEANUP] Deleting project workspace...
00:35:57.622 [WS-CLEANUP] Deferred wipeout is used...
00:35:57.630 [WS-CLEANUP] done
00:35:57.632 [Pipeline] }
00:35:57.647 [Pipeline] // stage
00:35:57.653 [Pipeline] }
00:35:57.666 [Pipeline] // node
00:35:57.672 [Pipeline] End of Pipeline
00:35:57.713 Finished: SUCCESS